00:00:00.001 Started by upstream project "autotest-nightly" build number 4356 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3719 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.223 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.224 The recommended git tool is: git 00:00:00.224 using credential 00000000-0000-0000-0000-000000000002 00:00:00.226 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.263 Fetching changes from the remote Git repository 00:00:00.265 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.304 Using shallow fetch with depth 1 00:00:00.305 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.305 > git --version # timeout=10 00:00:00.339 > git --version # 'git version 2.39.2' 00:00:00.339 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.358 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.358 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.981 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.992 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.002 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.002 > git config core.sparsecheckout # timeout=10 00:00:06.014 > git read-tree -mu HEAD # timeout=10 00:00:06.028 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.055 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.055 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.139 [Pipeline] Start of Pipeline 00:00:06.152 [Pipeline] library 00:00:06.154 Loading library shm_lib@master 00:00:06.154 Library shm_lib@master is cached. Copying from home. 00:00:06.167 [Pipeline] node 00:00:06.179 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.181 [Pipeline] { 00:00:06.190 [Pipeline] catchError 00:00:06.191 [Pipeline] { 00:00:06.202 [Pipeline] wrap 00:00:06.209 [Pipeline] { 00:00:06.213 [Pipeline] stage 00:00:06.215 [Pipeline] { (Prologue) 00:00:06.407 [Pipeline] sh 00:00:06.696 + logger -p user.info -t JENKINS-CI 00:00:06.715 [Pipeline] echo 00:00:06.717 Node: WFP4 00:00:06.724 [Pipeline] sh 00:00:07.026 [Pipeline] setCustomBuildProperty 00:00:07.038 [Pipeline] echo 00:00:07.039 Cleanup processes 00:00:07.042 [Pipeline] sh 00:00:07.325 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.325 2371618 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.337 [Pipeline] sh 00:00:07.620 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.620 ++ grep -v 'sudo pgrep' 00:00:07.620 ++ awk '{print $1}' 00:00:07.620 + sudo kill -9 00:00:07.620 + true 00:00:07.635 [Pipeline] cleanWs 00:00:07.650 [WS-CLEANUP] Deleting project workspace... 00:00:07.650 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.657 [WS-CLEANUP] done 00:00:07.662 [Pipeline] setCustomBuildProperty 00:00:07.675 [Pipeline] sh 00:00:07.959 + sudo git config --global --replace-all safe.directory '*' 00:00:08.041 [Pipeline] httpRequest 00:00:08.397 [Pipeline] echo 00:00:08.398 Sorcerer 10.211.164.20 is alive 00:00:08.406 [Pipeline] retry 00:00:08.407 [Pipeline] { 00:00:08.418 [Pipeline] httpRequest 00:00:08.422 HttpMethod: GET 00:00:08.422 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.423 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.436 Response Code: HTTP/1.1 200 OK 00:00:08.437 Success: Status code 200 is in the accepted range: 200,404 00:00:08.437 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.313 [Pipeline] } 00:00:09.331 [Pipeline] // retry 00:00:09.340 [Pipeline] sh 00:00:09.625 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.641 [Pipeline] httpRequest 00:00:10.459 [Pipeline] echo 00:00:10.460 Sorcerer 10.211.164.20 is alive 00:00:10.469 [Pipeline] retry 00:00:10.471 [Pipeline] { 00:00:10.484 [Pipeline] httpRequest 00:00:10.488 HttpMethod: GET 00:00:10.488 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:10.489 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:10.504 Response Code: HTTP/1.1 200 OK 00:00:10.505 Success: Status code 200 is in the accepted range: 200,404 00:00:10.505 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:12.771 [Pipeline] } 00:01:12.783 [Pipeline] // retry 00:01:12.790 [Pipeline] sh 00:01:13.076 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:15.644 [Pipeline] sh 00:01:15.929 + git -C spdk log --oneline -n5 00:01:15.929 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:15.929 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:15.929 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:15.929 66289a6db build: use VERSION file for storing version 00:01:15.929 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:15.940 [Pipeline] } 00:01:15.952 [Pipeline] // stage 00:01:15.960 [Pipeline] stage 00:01:15.961 [Pipeline] { (Prepare) 00:01:15.974 [Pipeline] writeFile 00:01:15.987 [Pipeline] sh 00:01:16.271 + logger -p user.info -t JENKINS-CI 00:01:16.284 [Pipeline] sh 00:01:16.569 + logger -p user.info -t JENKINS-CI 00:01:16.582 [Pipeline] sh 00:01:16.865 + cat autorun-spdk.conf 00:01:16.865 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.865 SPDK_TEST_NVMF=1 00:01:16.865 SPDK_TEST_NVME_CLI=1 00:01:16.865 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.865 SPDK_TEST_NVMF_NICS=e810 00:01:16.865 SPDK_RUN_ASAN=1 00:01:16.865 SPDK_RUN_UBSAN=1 00:01:16.865 NET_TYPE=phy 00:01:16.873 RUN_NIGHTLY=1 00:01:16.877 [Pipeline] readFile 00:01:16.900 [Pipeline] withEnv 00:01:16.902 [Pipeline] { 00:01:16.914 [Pipeline] sh 00:01:17.200 + set -ex 00:01:17.200 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:17.200 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:17.200 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.200 ++ SPDK_TEST_NVMF=1 00:01:17.200 ++ SPDK_TEST_NVME_CLI=1 00:01:17.200 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.200 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:17.200 ++ SPDK_RUN_ASAN=1 00:01:17.200 ++ SPDK_RUN_UBSAN=1 00:01:17.200 ++ NET_TYPE=phy 00:01:17.200 ++ RUN_NIGHTLY=1 00:01:17.200 + case $SPDK_TEST_NVMF_NICS in 00:01:17.200 + DRIVERS=ice 00:01:17.200 + [[ tcp == \r\d\m\a ]] 00:01:17.200 + [[ -n ice ]] 00:01:17.200 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:17.200 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:17.200 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:17.200 rmmod: ERROR: Module i40iw is not currently loaded 00:01:17.200 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:17.200 + true 00:01:17.200 + for D in $DRIVERS 00:01:17.200 + sudo modprobe ice 00:01:17.200 + exit 0 00:01:17.209 [Pipeline] } 00:01:17.224 [Pipeline] // withEnv 00:01:17.230 [Pipeline] } 00:01:17.244 [Pipeline] // stage 00:01:17.253 [Pipeline] catchError 00:01:17.255 [Pipeline] { 00:01:17.269 [Pipeline] timeout 00:01:17.269 Timeout set to expire in 1 hr 0 min 00:01:17.271 [Pipeline] { 00:01:17.285 [Pipeline] stage 00:01:17.287 [Pipeline] { (Tests) 00:01:17.301 [Pipeline] sh 00:01:17.587 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.587 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.587 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.587 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:17.587 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.587 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.587 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:17.587 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.587 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:17.587 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:17.587 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:17.587 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:17.587 + source /etc/os-release 00:01:17.587 ++ NAME='Fedora Linux' 00:01:17.587 ++ VERSION='39 (Cloud Edition)' 00:01:17.587 ++ ID=fedora 00:01:17.587 ++ VERSION_ID=39 00:01:17.587 ++ VERSION_CODENAME= 00:01:17.587 ++ PLATFORM_ID=platform:f39 00:01:17.587 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:17.587 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:17.587 ++ LOGO=fedora-logo-icon 00:01:17.587 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:17.587 ++ HOME_URL=https://fedoraproject.org/ 00:01:17.587 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:17.587 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:17.587 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:17.587 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:17.587 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:17.587 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:17.588 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:17.588 ++ SUPPORT_END=2024-11-12 00:01:17.588 ++ VARIANT='Cloud Edition' 00:01:17.588 ++ VARIANT_ID=cloud 00:01:17.588 + uname -a 00:01:17.588 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:17.588 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:19.493 Hugepages 00:01:19.493 node hugesize free / total 00:01:19.493 node0 1048576kB 0 / 0 00:01:19.493 node0 2048kB 0 / 0 00:01:19.493 node1 1048576kB 0 / 0 00:01:19.493 node1 2048kB 0 / 0 00:01:19.493 00:01:19.493 Type BDF Vendor Device NUMA Driver Device Block devices 
00:01:19.493 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:19.493 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:19.753 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:19.753 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:19.753 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:19.753 + rm -f /tmp/spdk-ld-path 00:01:19.753 + source autorun-spdk.conf 00:01:19.753 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.753 ++ SPDK_TEST_NVMF=1 00:01:19.753 ++ SPDK_TEST_NVME_CLI=1 00:01:19.753 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.753 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.753 ++ SPDK_RUN_ASAN=1 00:01:19.753 ++ SPDK_RUN_UBSAN=1 00:01:19.753 ++ NET_TYPE=phy 00:01:19.753 ++ RUN_NIGHTLY=1 00:01:19.753 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:19.753 + [[ -n '' ]] 00:01:19.753 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:19.753 + for M in /var/spdk/build-*-manifest.txt 00:01:19.753 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:19.753 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.753 + for M in /var/spdk/build-*-manifest.txt 00:01:19.753 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.753 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.753 + for M in /var/spdk/build-*-manifest.txt 00:01:19.753 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.753 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:19.753 ++ uname 00:01:19.753 + [[ Linux == \L\i\n\u\x ]] 00:01:19.753 + sudo dmesg -T 00:01:19.753 + sudo dmesg --clear 00:01:19.753 + dmesg_pid=2373080 00:01:19.753 + [[ Fedora Linux == FreeBSD ]] 00:01:19.753 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.753 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.753 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.753 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.753 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.753 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.753 + sudo dmesg -Tw 00:01:19.753 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.753 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:19.753 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.753 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.753 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.753 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.753 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.753 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.753 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.753 03:12:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:19.753 03:12:20 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:19.753 03:12:20 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:19.753 03:12:20 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:19.753 03:12:20 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:20.018 03:12:20 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:20.018 03:12:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:20.018 03:12:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:20.018 03:12:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.018 03:12:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.018 03:12:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.018 03:12:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.018 03:12:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.019 03:12:20 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.019 03:12:20 -- paths/export.sh@5 -- $ export PATH 00:01:20.019 03:12:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.019 03:12:20 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:20.019 03:12:20 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:20.019 03:12:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734055941.XXXXXX 00:01:20.019 03:12:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734055941.uB5tY5 00:01:20.019 03:12:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:20.019 03:12:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:20.019 03:12:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:20.019 03:12:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:20.019 03:12:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.019 03:12:21 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:20.019 03:12:21 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:20.019 03:12:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.019 03:12:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:20.019 03:12:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:20.019 03:12:21 -- pm/common@17 -- $ local monitor 00:01:20.019 03:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.019 03:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.019 03:12:21 -- pm/common@21 -- $ date +%s 00:01:20.019 03:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.019 03:12:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.019 03:12:21 -- pm/common@21 -- $ date +%s 00:01:20.019 03:12:21 -- pm/common@25 -- $ sleep 1 00:01:20.019 03:12:21 -- pm/common@21 -- $ date +%s 00:01:20.019 03:12:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 
-p monitor.autobuild.sh.1734055941 00:01:20.019 03:12:21 -- pm/common@21 -- $ date +%s 00:01:20.019 03:12:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734055941 00:01:20.019 03:12:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734055941 00:01:20.019 03:12:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734055941 00:01:20.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734055941_collect-cpu-load.pm.log 00:01:20.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734055941_collect-vmstat.pm.log 00:01:20.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734055941_collect-cpu-temp.pm.log 00:01:20.019 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734055941_collect-bmc-pm.bmc.pm.log 00:01:20.958 03:12:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:20.958 03:12:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.958 03:12:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.958 03:12:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.958 03:12:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.958 Fri Dec 13 02:12:22 AM UTC 2024 00:01:20.958 03:12:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.958 v25.01-rc1-2-ge01cb43b8 00:01:20.958 03:12:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:20.958 03:12:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:20.958 03:12:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.958 03:12:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.958 03:12:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.958 ************************************ 00:01:20.958 START TEST asan 00:01:20.958 ************************************ 00:01:20.958 03:12:22 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:20.958 using asan 00:01:20.958 00:01:20.958 real 0m0.000s 00:01:20.958 user 0m0.000s 00:01:20.958 sys 0m0.000s 00:01:20.958 03:12:22 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.958 03:12:22 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.958 ************************************ 00:01:20.958 END TEST asan 00:01:20.958 ************************************ 00:01:20.958 03:12:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.958 03:12:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.958 03:12:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:20.958 03:12:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:20.958 03:12:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.958 ************************************ 00:01:20.958 START TEST ubsan 00:01:20.958 ************************************ 00:01:20.958 03:12:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:20.958 using ubsan 00:01:20.958 00:01:20.958 real 
0m0.000s 00:01:20.958 user 0m0.000s 00:01:20.958 sys 0m0.000s 00:01:20.958 03:12:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:20.958 03:12:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.958 ************************************ 00:01:20.958 END TEST ubsan 00:01:20.958 ************************************ 00:01:21.217 03:12:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.217 03:12:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.217 03:12:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.217 03:12:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.217 03:12:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.217 03:12:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.218 03:12:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.218 03:12:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.218 03:12:22 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:21.218 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:21.218 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.509 Using 'verbs' RDMA provider 00:01:34.661 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:44.644 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:45.163 Creating mk/config.mk...done. 00:01:45.163 Creating mk/cc.flags.mk...done. 00:01:45.163 Type 'make' to build. 
00:01:45.163 03:12:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:45.163 03:12:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:45.163 03:12:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:45.163 03:12:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.163 ************************************ 00:01:45.163 START TEST make 00:01:45.163 ************************************ 00:01:45.163 03:12:46 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:55.162 The Meson build system 00:01:55.162 Version: 1.5.0 00:01:55.162 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:55.162 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:55.162 Build type: native build 00:01:55.162 Program cat found: YES (/usr/bin/cat) 00:01:55.162 Project name: DPDK 00:01:55.162 Project version: 24.03.0 00:01:55.162 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:55.162 C linker for the host machine: cc ld.bfd 2.40-14 00:01:55.162 Host machine cpu family: x86_64 00:01:55.162 Host machine cpu: x86_64 00:01:55.162 Message: ## Building in Developer Mode ## 00:01:55.162 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:55.162 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:55.162 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:55.162 Program python3 found: YES (/usr/bin/python3) 00:01:55.162 Program cat found: YES (/usr/bin/cat) 00:01:55.162 Compiler for C supports arguments -march=native: YES 00:01:55.162 Checking for size of "void *" : 8 00:01:55.162 Checking for size of "void *" : 8 (cached) 00:01:55.162 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:55.162 Library m found: YES 00:01:55.162 Library numa found: YES 00:01:55.162 Has header "numaif.h" : YES 00:01:55.162 Library fdt found: NO 00:01:55.162 Library execinfo found: NO 00:01:55.162 Has header "execinfo.h" : YES 00:01:55.162 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:55.162 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:55.162 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:55.162 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:55.162 Run-time dependency openssl found: YES 3.1.1 00:01:55.162 Run-time dependency libpcap found: YES 1.10.4 00:01:55.162 Has header "pcap.h" with dependency libpcap: YES 00:01:55.162 Compiler for C supports arguments -Wcast-qual: YES 00:01:55.162 Compiler for C supports arguments -Wdeprecated: YES 00:01:55.162 Compiler for C supports arguments -Wformat: YES 00:01:55.162 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:55.162 Compiler for C supports arguments -Wformat-security: NO 00:01:55.162 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:55.162 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:55.162 Compiler for C supports arguments -Wnested-externs: YES 00:01:55.162 Compiler for C supports arguments -Wold-style-definition: YES 00:01:55.162 Compiler for C supports arguments -Wpointer-arith: YES 00:01:55.162 Compiler for C supports arguments -Wsign-compare: YES 00:01:55.162 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:55.162 Compiler for C supports arguments -Wundef: YES 00:01:55.162 Compiler for C supports arguments -Wwrite-strings: YES 
00:01:55.162 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:55.162 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:55.162 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:55.162 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:55.162 Program objdump found: YES (/usr/bin/objdump) 00:01:55.162 Compiler for C supports arguments -mavx512f: YES 00:01:55.162 Checking if "AVX512 checking" compiles: YES 00:01:55.162 Fetching value of define "__SSE4_2__" : 1 00:01:55.162 Fetching value of define "__AES__" : 1 00:01:55.162 Fetching value of define "__AVX__" : 1 00:01:55.162 Fetching value of define "__AVX2__" : 1 00:01:55.162 Fetching value of define "__AVX512BW__" : 1 00:01:55.162 Fetching value of define "__AVX512CD__" : 1 00:01:55.162 Fetching value of define "__AVX512DQ__" : 1 00:01:55.162 Fetching value of define "__AVX512F__" : 1 00:01:55.162 Fetching value of define "__AVX512VL__" : 1 00:01:55.162 Fetching value of define "__PCLMUL__" : 1 00:01:55.162 Fetching value of define "__RDRND__" : 1 00:01:55.162 Fetching value of define "__RDSEED__" : 1 00:01:55.162 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:55.162 Fetching value of define "__znver1__" : (undefined) 00:01:55.162 Fetching value of define "__znver2__" : (undefined) 00:01:55.162 Fetching value of define "__znver3__" : (undefined) 00:01:55.162 Fetching value of define "__znver4__" : (undefined) 00:01:55.162 Library asan found: YES 00:01:55.162 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:55.162 Message: lib/log: Defining dependency "log" 00:01:55.162 Message: lib/kvargs: Defining dependency "kvargs" 00:01:55.162 Message: lib/telemetry: Defining dependency "telemetry" 00:01:55.162 Library rt found: YES 00:01:55.162 Checking for function "getentropy" : NO 00:01:55.162 Message: lib/eal: Defining dependency "eal" 00:01:55.162 Message: lib/ring: Defining dependency "ring" 00:01:55.162 Message: lib/rcu: Defining dependency "rcu" 00:01:55.162 Message: lib/mempool: Defining dependency "mempool" 00:01:55.162 Message: lib/mbuf: Defining dependency "mbuf" 00:01:55.162 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:55.162 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.162 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:55.162 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:55.162 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:55.162 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:55.162 Compiler for C supports arguments -mpclmul: YES 00:01:55.162 Compiler for C supports arguments -maes: YES 00:01:55.162 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:55.162 Compiler for C supports arguments -mavx512bw: YES 00:01:55.162 Compiler for C supports arguments -mavx512dq: YES 00:01:55.162 Compiler for C supports arguments -mavx512vl: YES 00:01:55.162 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:55.162 Compiler for C supports arguments -mavx2: YES 00:01:55.162 Compiler for C supports arguments -mavx: YES 00:01:55.162 Message: lib/net: Defining dependency "net" 00:01:55.162 Message: lib/meter: Defining dependency "meter" 00:01:55.162 Message: lib/ethdev: Defining dependency "ethdev" 00:01:55.162 Message: lib/pci: Defining dependency "pci" 00:01:55.162 Message: lib/cmdline: Defining dependency "cmdline" 00:01:55.162 Message: lib/hash: Defining dependency "hash" 00:01:55.162 Message: lib/timer: Defining dependency 
"timer" 00:01:55.162 Message: lib/compressdev: Defining dependency "compressdev" 00:01:55.162 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:55.162 Message: lib/dmadev: Defining dependency "dmadev" 00:01:55.162 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:55.162 Message: lib/power: Defining dependency "power" 00:01:55.162 Message: lib/reorder: Defining dependency "reorder" 00:01:55.162 Message: lib/security: Defining dependency "security" 00:01:55.162 Has header "linux/userfaultfd.h" : YES 00:01:55.162 Has header "linux/vduse.h" : YES 00:01:55.162 Message: lib/vhost: Defining dependency "vhost" 00:01:55.162 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:55.162 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:55.162 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:55.162 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.162 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:55.162 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:55.162 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:55.162 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:55.162 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:55.162 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:55.162 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:55.162 Configuring doxy-api-html.conf using configuration 00:01:55.162 Configuring doxy-api-man.conf using configuration 00:01:55.162 Program mandb found: YES (/usr/bin/mandb) 00:01:55.162 Program sphinx-build found: NO 00:01:55.162 Configuring rte_build_config.h using configuration 00:01:55.162 Message: 00:01:55.163 ================= 00:01:55.163 Applications Enabled 00:01:55.163 ================= 00:01:55.163 00:01:55.163 apps: 00:01:55.163 00:01:55.163 00:01:55.163 Message: 00:01:55.163 ================= 00:01:55.163 Libraries Enabled 00:01:55.163 ================= 00:01:55.163 00:01:55.163 libs: 00:01:55.163 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:55.163 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:55.163 cryptodev, dmadev, power, reorder, security, vhost, 00:01:55.163 00:01:55.163 Message: 00:01:55.163 =============== 00:01:55.163 Drivers Enabled 00:01:55.163 =============== 00:01:55.163 00:01:55.163 common: 00:01:55.163 00:01:55.163 bus: 00:01:55.163 pci, vdev, 00:01:55.163 mempool: 00:01:55.163 ring, 00:01:55.163 dma: 00:01:55.163 00:01:55.163 net: 00:01:55.163 00:01:55.163 crypto: 00:01:55.163 00:01:55.163 compress: 00:01:55.163 00:01:55.163 vdpa: 00:01:55.163 00:01:55.163 00:01:55.163 Message: 00:01:55.163 ================= 00:01:55.163 Content Skipped 00:01:55.163 ================= 00:01:55.163 00:01:55.163 apps: 00:01:55.163 dumpcap: explicitly disabled via build config 00:01:55.163 graph: explicitly disabled via build config 00:01:55.163 pdump: explicitly disabled via build config 00:01:55.163 proc-info: explicitly disabled via build config 00:01:55.163 test-acl: explicitly disabled via build config 00:01:55.163 test-bbdev: explicitly disabled via build config 00:01:55.163 test-cmdline: explicitly disabled via build config 00:01:55.163 test-compress-perf: explicitly disabled via build config 00:01:55.163 test-crypto-perf: explicitly disabled via build config 00:01:55.163 test-dma-perf: explicitly disabled via build config 00:01:55.163 
test-eventdev: explicitly disabled via build config 00:01:55.163 test-fib: explicitly disabled via build config 00:01:55.163 test-flow-perf: explicitly disabled via build config 00:01:55.163 test-gpudev: explicitly disabled via build config 00:01:55.163 test-mldev: explicitly disabled via build config 00:01:55.163 test-pipeline: explicitly disabled via build config 00:01:55.163 test-pmd: explicitly disabled via build config 00:01:55.163 test-regex: explicitly disabled via build config 00:01:55.163 test-sad: explicitly disabled via build config 00:01:55.163 test-security-perf: explicitly disabled via build config 00:01:55.163 00:01:55.163 libs: 00:01:55.163 argparse: explicitly disabled via build config 00:01:55.163 metrics: explicitly disabled via build config 00:01:55.163 acl: explicitly disabled via build config 00:01:55.163 bbdev: explicitly disabled via build config 00:01:55.163 bitratestats: explicitly disabled via build config 00:01:55.163 bpf: explicitly disabled via build config 00:01:55.163 cfgfile: explicitly disabled via build config 00:01:55.163 distributor: explicitly disabled via build config 00:01:55.163 efd: explicitly disabled via build config 00:01:55.163 eventdev: explicitly disabled via build config 00:01:55.163 dispatcher: explicitly disabled via build config 00:01:55.163 gpudev: explicitly disabled via build config 00:01:55.163 gro: explicitly disabled via build config 00:01:55.163 gso: explicitly disabled via build config 00:01:55.163 ip_frag: explicitly disabled via build config 00:01:55.163 jobstats: explicitly disabled via build config 00:01:55.163 latencystats: explicitly disabled via build config 00:01:55.163 lpm: explicitly disabled via build config 00:01:55.163 member: explicitly disabled via build config 00:01:55.163 pcapng: explicitly disabled via build config 00:01:55.163 rawdev: explicitly disabled via build config 00:01:55.163 regexdev: explicitly disabled via build config 00:01:55.163 mldev: explicitly disabled via build config 00:01:55.163 rib: explicitly disabled via build config 00:01:55.163 sched: explicitly disabled via build config 00:01:55.163 stack: explicitly disabled via build config 00:01:55.163 ipsec: explicitly disabled via build config 00:01:55.163 pdcp: explicitly disabled via build config 00:01:55.163 fib: explicitly disabled via build config 00:01:55.163 port: explicitly disabled via build config 00:01:55.163 pdump: explicitly disabled via build config 00:01:55.163 table: explicitly disabled via build config 00:01:55.163 pipeline: explicitly disabled via build config 00:01:55.163 graph: explicitly disabled via build config 00:01:55.163 node: explicitly disabled via build config 00:01:55.163 00:01:55.163 drivers: 00:01:55.163 common/cpt: not in enabled drivers build config 00:01:55.163 common/dpaax: not in enabled drivers build config 00:01:55.163 common/iavf: not in enabled drivers build config 00:01:55.163 common/idpf: not in enabled drivers build config 00:01:55.163 common/ionic: not in enabled drivers build config 00:01:55.163 common/mvep: not in enabled drivers build config 00:01:55.163 common/octeontx: not in enabled drivers build config 00:01:55.163 bus/auxiliary: not in enabled drivers build config 00:01:55.163 bus/cdx: not in enabled drivers build config 00:01:55.163 bus/dpaa: not in enabled drivers build config 00:01:55.163 bus/fslmc: not in enabled drivers build config 00:01:55.163 bus/ifpga: not in enabled drivers build config 00:01:55.163 bus/platform: not in enabled drivers build config 00:01:55.163 bus/uacce: not in enabled 
drivers build config 00:01:55.163 bus/vmbus: not in enabled drivers build config 00:01:55.163 common/cnxk: not in enabled drivers build config 00:01:55.163 common/mlx5: not in enabled drivers build config 00:01:55.163 common/nfp: not in enabled drivers build config 00:01:55.163 common/nitrox: not in enabled drivers build config 00:01:55.163 common/qat: not in enabled drivers build config 00:01:55.163 common/sfc_efx: not in enabled drivers build config 00:01:55.163 mempool/bucket: not in enabled drivers build config 00:01:55.163 mempool/cnxk: not in enabled drivers build config 00:01:55.163 mempool/dpaa: not in enabled drivers build config 00:01:55.163 mempool/dpaa2: not in enabled drivers build config 00:01:55.163 mempool/octeontx: not in enabled drivers build config 00:01:55.163 mempool/stack: not in enabled drivers build config 00:01:55.163 dma/cnxk: not in enabled drivers build config 00:01:55.163 dma/dpaa: not in enabled drivers build config 00:01:55.163 dma/dpaa2: not in enabled drivers build config 00:01:55.163 dma/hisilicon: not in enabled drivers build config 00:01:55.163 dma/idxd: not in enabled drivers build config 00:01:55.163 dma/ioat: not in enabled drivers build config 00:01:55.163 dma/skeleton: not in enabled drivers build config 00:01:55.163 net/af_packet: not in enabled drivers build config 00:01:55.163 net/af_xdp: not in enabled drivers build config 00:01:55.163 net/ark: not in enabled drivers build config 00:01:55.163 net/atlantic: not in enabled drivers build config 00:01:55.163 net/avp: not in enabled drivers build config 00:01:55.163 net/axgbe: not in enabled drivers build config 00:01:55.163 net/bnx2x: not in enabled drivers build config 00:01:55.163 net/bnxt: not in enabled drivers build config 00:01:55.163 net/bonding: not in enabled drivers build config 00:01:55.163 net/cnxk: not in enabled drivers build config 00:01:55.163 net/cpfl: not in enabled drivers build config 00:01:55.163 net/cxgbe: not in enabled drivers build config 00:01:55.163 net/dpaa: not in enabled drivers build config 00:01:55.163 net/dpaa2: not in enabled drivers build config 00:01:55.163 net/e1000: not in enabled drivers build config 00:01:55.163 net/ena: not in enabled drivers build config 00:01:55.163 net/enetc: not in enabled drivers build config 00:01:55.163 net/enetfec: not in enabled drivers build config 00:01:55.163 net/enic: not in enabled drivers build config 00:01:55.163 net/failsafe: not in enabled drivers build config 00:01:55.163 net/fm10k: not in enabled drivers build config 00:01:55.163 net/gve: not in enabled drivers build config 00:01:55.163 net/hinic: not in enabled drivers build config 00:01:55.163 net/hns3: not in enabled drivers build config 00:01:55.163 net/i40e: not in enabled drivers build config 00:01:55.163 net/iavf: not in enabled drivers build config 00:01:55.163 net/ice: not in enabled drivers build config 00:01:55.163 net/idpf: not in enabled drivers build config 00:01:55.163 net/igc: not in enabled drivers build config 00:01:55.163 net/ionic: not in enabled drivers build config 00:01:55.163 net/ipn3ke: not in enabled drivers build config 00:01:55.163 net/ixgbe: not in enabled drivers build config 00:01:55.163 net/mana: not in enabled drivers build config 00:01:55.163 net/memif: not in enabled drivers build config 00:01:55.163 net/mlx4: not in enabled drivers build config 00:01:55.163 net/mlx5: not in enabled drivers build config 00:01:55.163 net/mvneta: not in enabled drivers build config 00:01:55.163 net/mvpp2: not in enabled drivers build config 00:01:55.163 
net/netvsc: not in enabled drivers build config 00:01:55.163 net/nfb: not in enabled drivers build config 00:01:55.163 net/nfp: not in enabled drivers build config 00:01:55.163 net/ngbe: not in enabled drivers build config 00:01:55.163 net/null: not in enabled drivers build config 00:01:55.163 net/octeontx: not in enabled drivers build config 00:01:55.163 net/octeon_ep: not in enabled drivers build config 00:01:55.163 net/pcap: not in enabled drivers build config 00:01:55.163 net/pfe: not in enabled drivers build config 00:01:55.163 net/qede: not in enabled drivers build config 00:01:55.163 net/ring: not in enabled drivers build config 00:01:55.163 net/sfc: not in enabled drivers build config 00:01:55.163 net/softnic: not in enabled drivers build config 00:01:55.163 net/tap: not in enabled drivers build config 00:01:55.163 net/thunderx: not in enabled drivers build config 00:01:55.163 net/txgbe: not in enabled drivers build config 00:01:55.163 net/vdev_netvsc: not in enabled drivers build config 00:01:55.163 net/vhost: not in enabled drivers build config 00:01:55.163 net/virtio: not in enabled drivers build config 00:01:55.163 net/vmxnet3: not in enabled drivers build config 00:01:55.163 raw/*: missing internal dependency, "rawdev" 00:01:55.163 crypto/armv8: not in enabled drivers build config 00:01:55.163 crypto/bcmfs: not in enabled drivers build config 00:01:55.163 crypto/caam_jr: not in enabled drivers build config 00:01:55.163 crypto/ccp: not in enabled drivers build config 00:01:55.163 crypto/cnxk: not in enabled drivers build config 00:01:55.163 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.163 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.163 crypto/ipsec_mb: not in enabled drivers build config 00:01:55.163 crypto/mlx5: not in enabled drivers build config 00:01:55.163 crypto/mvsam: not in enabled drivers build config 00:01:55.163 crypto/nitrox: not in enabled drivers build config 00:01:55.163 crypto/null: not in enabled drivers build config 00:01:55.164 crypto/octeontx: not in enabled drivers build config 00:01:55.164 crypto/openssl: not in enabled drivers build config 00:01:55.164 crypto/scheduler: not in enabled drivers build config 00:01:55.164 crypto/uadk: not in enabled drivers build config 00:01:55.164 crypto/virtio: not in enabled drivers build config 00:01:55.164 compress/isal: not in enabled drivers build config 00:01:55.164 compress/mlx5: not in enabled drivers build config 00:01:55.164 compress/nitrox: not in enabled drivers build config 00:01:55.164 compress/octeontx: not in enabled drivers build config 00:01:55.164 compress/zlib: not in enabled drivers build config 00:01:55.164 regex/*: missing internal dependency, "regexdev" 00:01:55.164 ml/*: missing internal dependency, "mldev" 00:01:55.164 vdpa/ifc: not in enabled drivers build config 00:01:55.164 vdpa/mlx5: not in enabled drivers build config 00:01:55.164 vdpa/nfp: not in enabled drivers build config 00:01:55.164 vdpa/sfc: not in enabled drivers build config 00:01:55.164 event/*: missing internal dependency, "eventdev" 00:01:55.164 baseband/*: missing internal dependency, "bbdev" 00:01:55.164 gpu/*: missing internal dependency, "gpudev" 00:01:55.164 00:01:55.164 00:01:55.164 Build targets in project: 85 00:01:55.164 00:01:55.164 DPDK 24.03.0 00:01:55.164 00:01:55.164 User defined options 00:01:55.164 buildtype : debug 00:01:55.164 default_library : shared 00:01:55.164 libdir : lib 00:01:55.164 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:55.164 
b_sanitize : address 00:01:55.164 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:55.164 c_link_args : 00:01:55.164 cpu_instruction_set: native 00:01:55.164 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:55.164 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:55.164 enable_docs : false 00:01:55.164 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:55.164 enable_kmods : false 00:01:55.164 max_lcores : 128 00:01:55.164 tests : false 00:01:55.164 00:01:55.164 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:55.164 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:55.164 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.164 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.164 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.164 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.164 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:55.164 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:55.164 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:55.164 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.164 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.164 [10/268] Linking static target lib/librte_kvargs.a 00:01:55.164 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.164 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:55.164 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:55.164 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.164 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.164 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.164 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.164 [18/268] Linking static target lib/librte_log.a 00:01:55.164 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.164 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.164 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.164 [22/268] Linking static target lib/librte_pci.a 00:01:55.164 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.164 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.164 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.164 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.164 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.164 [28/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.424 [29/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.424 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.424 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.424 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.424 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.424 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.424 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.424 [36/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.424 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.424 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.424 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.424 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.424 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.424 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.424 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.424 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.424 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.424 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.424 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.424 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.424 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.424 [50/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.424 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.424 [52/268] Linking static target lib/librte_meter.a 00:01:55.424 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.424 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.424 [55/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.424 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.424 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.424 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.424 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.424 [60/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.424 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.424 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.424 [63/268] Linking static target lib/librte_ring.a 00:01:55.424 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.424 [65/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.424 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.424 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.424 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.424 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.424 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.424 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.424 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.424 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.424 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.424 [75/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.424 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.424 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.424 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.424 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.424 [80/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.424 [81/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.424 [82/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.424 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.424 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.424 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.424 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.424 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.424 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.424 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.424 [90/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.424 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.424 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.424 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.424 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.683 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.683 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.683 [97/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.683 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.683 [99/268] Linking static target lib/librte_telemetry.a 00:01:55.683 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.683 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.683 [102/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.683 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.683 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.683 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.683 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.683 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.683 [108/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.683 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.683 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.683 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.683 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.683 [113/268] Linking static target lib/librte_cmdline.a 00:01:55.683 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.683 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.683 [116/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.683 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.683 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.683 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.683 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.683 [121/268] Linking static target lib/librte_mempool.a 00:01:55.683 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.683 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.683 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.683 [125/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.683 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.683 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.683 [128/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.683 [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.683 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.683 [131/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.683 [132/268] Linking static target lib/librte_eal.a 00:01:55.683 [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.683 [134/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.683 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.942 [136/268] Linking static target lib/librte_timer.a 00:01:55.942 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.942 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.942 [139/268] Linking target lib/librte_log.so.24.1 00:01:55.942 [140/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.942 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.942 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.942 [143/268] Linking static target lib/librte_rcu.a 00:01:55.942 [144/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.942 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.942 [146/268] Linking static target lib/librte_net.a 00:01:55.942 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.942 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.942 [149/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.942 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.942 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.942 [152/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.942 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.942 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.942 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.942 [156/268] Linking static target lib/librte_dmadev.a 00:01:55.942 [157/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.942 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.942 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.942 [160/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.942 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.942 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.942 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.942 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.942 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.942 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.201 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.201 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.201 [169/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.201 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.201 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:56.201 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.201 [173/268] Linking static target lib/librte_compressdev.a 00:01:56.201 [174/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.201 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.201 [176/268] Linking target lib/librte_telemetry.so.24.1 00:01:56.201 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.201 [178/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:56.201 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:56.201 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.201 [181/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.201 [182/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.201 [183/268] Linking static target lib/librte_security.a 00:01:56.201 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.201 [185/268] Linking static target drivers/librte_bus_vdev.a 00:01:56.201 [186/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.201 [187/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.201 [188/268] Linking static target lib/librte_power.a 00:01:56.201 [189/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.201 [190/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.201 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.201 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.201 [193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.201 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.201 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.201 [196/268] Linking static target lib/librte_mbuf.a 00:01:56.459 [197/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.459 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.459 [199/268] Linking static target lib/librte_reorder.a 00:01:56.459 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.459 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.459 [202/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.459 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.459 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.459 [205/268] Linking static target drivers/librte_bus_pci.a 00:01:56.459 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.459 [207/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.459 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.459 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.459 [210/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.459 [211/268] Linking static target lib/librte_hash.a 00:01:56.459 [212/268] Linking static target drivers/librte_mempool_ring.a 00:01:56.717 [213/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.717 [214/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.717 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.717 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.717 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.717 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.975 [219/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.975 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.975 [221/268] Linking static target lib/librte_cryptodev.a 00:01:56.975 [222/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.234 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.493 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.493 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 
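
The numbered [..] entries above show meson driving the build of SPDK's bundled DPDK: each rte_* component is compiled to objects, archived into a static library (drivers additionally get a shared .so), and a *.sym_chk step verifies the exported symbols. A minimal sketch of reproducing that configure-and-build step by hand is below; the dpdk/build-tmp directory and -j 96 match what the log reports a little further on, while the meson option shown is an assumption rather than the exact flags SPDK's configure script passes.

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp --default-library=static   # option is an assumption, not SPDK's exact flag
    ninja -C build-tmp -j 96                         # same build dir and parallelism the log reports
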
00:01:57.493 [226/268] Linking static target lib/librte_ethdev.a 00:01:58.868 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.868 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.155 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.155 [230/268] Linking static target lib/librte_vhost.a 00:02:03.533 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.434 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.434 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.434 [234/268] Linking target lib/librte_eal.so.24.1 00:02:05.434 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.693 [236/268] Linking target lib/librte_ring.so.24.1 00:02:05.693 [237/268] Linking target lib/librte_meter.so.24.1 00:02:05.693 [238/268] Linking target lib/librte_timer.so.24.1 00:02:05.693 [239/268] Linking target lib/librte_pci.so.24.1 00:02:05.693 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.693 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:05.693 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:05.693 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:05.693 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:05.693 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:05.693 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:05.693 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:05.693 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:05.693 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:05.952 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:05.952 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:05.952 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:05.952 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:05.952 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.210 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:06.210 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:06.210 [257/268] Linking target lib/librte_net.so.24.1 00:02:06.210 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:06.210 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.210 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:06.210 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:06.210 [262/268] Linking target lib/librte_hash.so.24.1 00:02:06.210 [263/268] Linking target lib/librte_security.so.24.1 00:02:06.468 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:06.468 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.468 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:06.468 [267/268] Linking target lib/librte_power.so.24.1 00:02:06.468 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:06.468 INFO: autodetecting backend as ninja 00:02:06.468 INFO: 
calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:18.678 CC lib/log/log_flags.o 00:02:18.678 CC lib/log/log.o 00:02:18.678 CC lib/log/log_deprecated.o 00:02:18.678 CC lib/ut/ut.o 00:02:18.678 CC lib/ut_mock/mock.o 00:02:18.678 LIB libspdk_ut.a 00:02:18.678 LIB libspdk_log.a 00:02:18.678 SO libspdk_ut.so.2.0 00:02:18.678 LIB libspdk_ut_mock.a 00:02:18.678 SO libspdk_log.so.7.1 00:02:18.678 SO libspdk_ut_mock.so.6.0 00:02:18.678 SYMLINK libspdk_ut.so 00:02:18.678 SYMLINK libspdk_log.so 00:02:18.678 SYMLINK libspdk_ut_mock.so 00:02:18.678 CXX lib/trace_parser/trace.o 00:02:18.678 CC lib/dma/dma.o 00:02:18.678 CC lib/util/base64.o 00:02:18.678 CC lib/util/bit_array.o 00:02:18.678 CC lib/util/cpuset.o 00:02:18.678 CC lib/util/crc16.o 00:02:18.678 CC lib/util/crc32c.o 00:02:18.678 CC lib/util/crc32.o 00:02:18.678 CC lib/util/crc64.o 00:02:18.678 CC lib/util/crc32_ieee.o 00:02:18.678 CC lib/util/fd_group.o 00:02:18.678 CC lib/util/dif.o 00:02:18.678 CC lib/util/fd.o 00:02:18.678 CC lib/util/hexlify.o 00:02:18.678 CC lib/util/file.o 00:02:18.678 CC lib/util/iov.o 00:02:18.678 CC lib/util/math.o 00:02:18.678 CC lib/util/net.o 00:02:18.678 CC lib/util/pipe.o 00:02:18.678 CC lib/util/strerror_tls.o 00:02:18.678 CC lib/util/string.o 00:02:18.678 CC lib/util/uuid.o 00:02:18.678 CC lib/util/xor.o 00:02:18.678 CC lib/util/md5.o 00:02:18.678 CC lib/util/zipf.o 00:02:18.678 CC lib/ioat/ioat.o 00:02:18.678 CC lib/vfio_user/host/vfio_user.o 00:02:18.678 CC lib/vfio_user/host/vfio_user_pci.o 00:02:18.678 LIB libspdk_dma.a 00:02:18.678 SO libspdk_dma.so.5.0 00:02:18.678 SYMLINK libspdk_dma.so 00:02:18.678 LIB libspdk_ioat.a 00:02:18.678 SO libspdk_ioat.so.7.0 00:02:18.678 SYMLINK libspdk_ioat.so 00:02:18.678 LIB libspdk_vfio_user.a 00:02:18.678 SO libspdk_vfio_user.so.5.0 00:02:18.678 SYMLINK libspdk_vfio_user.so 00:02:18.678 LIB libspdk_util.a 00:02:18.678 SO libspdk_util.so.10.1 00:02:18.678 SYMLINK libspdk_util.so 00:02:18.678 LIB libspdk_trace_parser.a 00:02:18.678 SO libspdk_trace_parser.so.6.0 00:02:18.678 SYMLINK libspdk_trace_parser.so 00:02:18.678 CC lib/vmd/vmd.o 00:02:18.678 CC lib/vmd/led.o 00:02:18.679 CC lib/json/json_parse.o 00:02:18.679 CC lib/json/json_write.o 00:02:18.679 CC lib/json/json_util.o 00:02:18.679 CC lib/conf/conf.o 00:02:18.679 CC lib/idxd/idxd.o 00:02:18.679 CC lib/idxd/idxd_kernel.o 00:02:18.679 CC lib/idxd/idxd_user.o 00:02:18.679 CC lib/rdma_utils/rdma_utils.o 00:02:18.679 CC lib/env_dpdk/env.o 00:02:18.679 CC lib/env_dpdk/memory.o 00:02:18.679 CC lib/env_dpdk/pci.o 00:02:18.679 CC lib/env_dpdk/init.o 00:02:18.679 CC lib/env_dpdk/threads.o 00:02:18.679 CC lib/env_dpdk/pci_ioat.o 00:02:18.679 CC lib/env_dpdk/pci_virtio.o 00:02:18.679 CC lib/env_dpdk/pci_vmd.o 00:02:18.679 CC lib/env_dpdk/pci_idxd.o 00:02:18.679 CC lib/env_dpdk/pci_event.o 00:02:18.679 CC lib/env_dpdk/sigbus_handler.o 00:02:18.679 CC lib/env_dpdk/pci_dpdk.o 00:02:18.679 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.679 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.937 LIB libspdk_conf.a 00:02:18.938 SO libspdk_conf.so.6.0 00:02:18.938 LIB libspdk_rdma_utils.a 00:02:18.938 LIB libspdk_json.a 00:02:19.196 SO libspdk_rdma_utils.so.1.0 00:02:19.196 SYMLINK libspdk_conf.so 00:02:19.196 SO libspdk_json.so.6.0 00:02:19.196 SYMLINK libspdk_rdma_utils.so 00:02:19.196 SYMLINK libspdk_json.so 00:02:19.455 LIB libspdk_idxd.a 00:02:19.455 LIB libspdk_vmd.a 00:02:19.455 SO libspdk_idxd.so.12.1 00:02:19.455 SO libspdk_vmd.so.6.0 00:02:19.455 CC 
lib/rdma_provider/common.o 00:02:19.455 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:19.455 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.455 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.455 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.455 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.455 SYMLINK libspdk_idxd.so 00:02:19.455 SYMLINK libspdk_vmd.so 00:02:19.714 LIB libspdk_rdma_provider.a 00:02:19.714 SO libspdk_rdma_provider.so.7.0 00:02:19.714 LIB libspdk_jsonrpc.a 00:02:19.714 SO libspdk_jsonrpc.so.6.0 00:02:19.714 SYMLINK libspdk_rdma_provider.so 00:02:19.974 SYMLINK libspdk_jsonrpc.so 00:02:20.233 LIB libspdk_env_dpdk.a 00:02:20.233 CC lib/rpc/rpc.o 00:02:20.233 SO libspdk_env_dpdk.so.15.1 00:02:20.233 SYMLINK libspdk_env_dpdk.so 00:02:20.492 LIB libspdk_rpc.a 00:02:20.492 SO libspdk_rpc.so.6.0 00:02:20.492 SYMLINK libspdk_rpc.so 00:02:20.750 CC lib/keyring/keyring_rpc.o 00:02:20.750 CC lib/keyring/keyring.o 00:02:20.750 CC lib/trace/trace.o 00:02:20.750 CC lib/trace/trace_flags.o 00:02:20.750 CC lib/trace/trace_rpc.o 00:02:20.750 CC lib/notify/notify.o 00:02:20.750 CC lib/notify/notify_rpc.o 00:02:21.009 LIB libspdk_notify.a 00:02:21.009 SO libspdk_notify.so.6.0 00:02:21.009 LIB libspdk_keyring.a 00:02:21.009 LIB libspdk_trace.a 00:02:21.009 SO libspdk_keyring.so.2.0 00:02:21.009 SYMLINK libspdk_notify.so 00:02:21.009 SO libspdk_trace.so.11.0 00:02:21.009 SYMLINK libspdk_keyring.so 00:02:21.269 SYMLINK libspdk_trace.so 00:02:21.528 CC lib/sock/sock.o 00:02:21.528 CC lib/sock/sock_rpc.o 00:02:21.528 CC lib/thread/thread.o 00:02:21.528 CC lib/thread/iobuf.o 00:02:21.787 LIB libspdk_sock.a 00:02:21.787 SO libspdk_sock.so.10.0 00:02:22.046 SYMLINK libspdk_sock.so 00:02:22.305 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:22.305 CC lib/nvme/nvme_ctrlr.o 00:02:22.305 CC lib/nvme/nvme_fabric.o 00:02:22.305 CC lib/nvme/nvme_ns_cmd.o 00:02:22.305 CC lib/nvme/nvme_ns.o 00:02:22.305 CC lib/nvme/nvme_pcie_common.o 00:02:22.305 CC lib/nvme/nvme_pcie.o 00:02:22.305 CC lib/nvme/nvme_qpair.o 00:02:22.305 CC lib/nvme/nvme.o 00:02:22.305 CC lib/nvme/nvme_quirks.o 00:02:22.305 CC lib/nvme/nvme_transport.o 00:02:22.305 CC lib/nvme/nvme_discovery.o 00:02:22.305 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:22.305 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:22.305 CC lib/nvme/nvme_poll_group.o 00:02:22.305 CC lib/nvme/nvme_tcp.o 00:02:22.305 CC lib/nvme/nvme_opal.o 00:02:22.305 CC lib/nvme/nvme_io_msg.o 00:02:22.305 CC lib/nvme/nvme_zns.o 00:02:22.305 CC lib/nvme/nvme_stubs.o 00:02:22.305 CC lib/nvme/nvme_auth.o 00:02:22.305 CC lib/nvme/nvme_cuse.o 00:02:22.305 CC lib/nvme/nvme_rdma.o 00:02:22.872 LIB libspdk_thread.a 00:02:22.872 SO libspdk_thread.so.11.0 00:02:23.131 SYMLINK libspdk_thread.so 00:02:23.389 CC lib/virtio/virtio.o 00:02:23.389 CC lib/virtio/virtio_vhost_user.o 00:02:23.389 CC lib/accel/accel_rpc.o 00:02:23.389 CC lib/virtio/virtio_vfio_user.o 00:02:23.389 CC lib/accel/accel.o 00:02:23.389 CC lib/virtio/virtio_pci.o 00:02:23.389 CC lib/accel/accel_sw.o 00:02:23.389 CC lib/init/subsystem.o 00:02:23.389 CC lib/init/subsystem_rpc.o 00:02:23.389 CC lib/init/json_config.o 00:02:23.389 CC lib/fsdev/fsdev.o 00:02:23.389 CC lib/blob/blobstore.o 00:02:23.389 CC lib/blob/zeroes.o 00:02:23.389 CC lib/fsdev/fsdev_io.o 00:02:23.389 CC lib/init/rpc.o 00:02:23.389 CC lib/blob/request.o 00:02:23.389 CC lib/fsdev/fsdev_rpc.o 00:02:23.389 CC lib/blob/blob_bs_dev.o 00:02:23.647 LIB libspdk_init.a 00:02:23.648 SO libspdk_init.so.6.0 00:02:23.648 LIB libspdk_virtio.a 00:02:23.648 SYMLINK libspdk_init.so 00:02:23.648 SO 
libspdk_virtio.so.7.0 00:02:23.906 SYMLINK libspdk_virtio.so 00:02:23.906 LIB libspdk_fsdev.a 00:02:23.906 SO libspdk_fsdev.so.2.0 00:02:23.906 CC lib/event/app.o 00:02:23.906 CC lib/event/reactor.o 00:02:23.906 CC lib/event/log_rpc.o 00:02:23.906 CC lib/event/app_rpc.o 00:02:23.906 CC lib/event/scheduler_static.o 00:02:24.164 SYMLINK libspdk_fsdev.so 00:02:24.422 LIB libspdk_nvme.a 00:02:24.422 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:24.422 LIB libspdk_accel.a 00:02:24.422 SO libspdk_accel.so.16.0 00:02:24.422 SO libspdk_nvme.so.15.0 00:02:24.422 LIB libspdk_event.a 00:02:24.422 SO libspdk_event.so.14.0 00:02:24.422 SYMLINK libspdk_accel.so 00:02:24.681 SYMLINK libspdk_event.so 00:02:24.681 SYMLINK libspdk_nvme.so 00:02:24.940 CC lib/bdev/bdev_rpc.o 00:02:24.940 CC lib/bdev/bdev.o 00:02:24.940 CC lib/bdev/scsi_nvme.o 00:02:24.940 CC lib/bdev/bdev_zone.o 00:02:24.940 CC lib/bdev/part.o 00:02:24.940 LIB libspdk_fuse_dispatcher.a 00:02:24.940 SO libspdk_fuse_dispatcher.so.1.0 00:02:25.199 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.577 LIB libspdk_blob.a 00:02:26.577 SO libspdk_blob.so.12.0 00:02:26.577 SYMLINK libspdk_blob.so 00:02:26.843 CC lib/blobfs/blobfs.o 00:02:26.843 CC lib/blobfs/tree.o 00:02:26.843 CC lib/lvol/lvol.o 00:02:27.415 LIB libspdk_bdev.a 00:02:27.415 SO libspdk_bdev.so.17.0 00:02:27.415 SYMLINK libspdk_bdev.so 00:02:27.675 LIB libspdk_blobfs.a 00:02:27.675 SO libspdk_blobfs.so.11.0 00:02:27.675 LIB libspdk_lvol.a 00:02:27.675 SYMLINK libspdk_blobfs.so 00:02:27.675 SO libspdk_lvol.so.11.0 00:02:27.675 CC lib/nvmf/ctrlr.o 00:02:27.675 CC lib/nvmf/ctrlr_discovery.o 00:02:27.675 CC lib/nvmf/ctrlr_bdev.o 00:02:27.675 CC lib/nvmf/subsystem.o 00:02:27.675 CC lib/nvmf/nvmf.o 00:02:27.675 CC lib/nvmf/nvmf_rpc.o 00:02:27.675 CC lib/nvmf/transport.o 00:02:27.675 CC lib/nvmf/tcp.o 00:02:27.675 CC lib/ftl/ftl_core.o 00:02:27.675 CC lib/nvmf/rdma.o 00:02:27.675 CC lib/nvmf/stubs.o 00:02:27.675 CC lib/ftl/ftl_init.o 00:02:27.675 CC lib/nvmf/mdns_server.o 00:02:27.675 CC lib/ftl/ftl_layout.o 00:02:27.675 CC lib/ftl/ftl_debug.o 00:02:27.675 CC lib/ublk/ublk.o 00:02:27.675 CC lib/ublk/ublk_rpc.o 00:02:27.675 CC lib/nvmf/auth.o 00:02:27.675 CC lib/ftl/ftl_sb.o 00:02:27.675 CC lib/ftl/ftl_io.o 00:02:27.675 CC lib/ftl/ftl_l2p.o 00:02:27.675 CC lib/ftl/ftl_l2p_flat.o 00:02:27.675 CC lib/ftl/ftl_nv_cache.o 00:02:27.675 CC lib/ftl/ftl_band.o 00:02:27.675 CC lib/scsi/dev.o 00:02:27.676 CC lib/scsi/lun.o 00:02:27.676 CC lib/ftl/ftl_band_ops.o 00:02:27.676 CC lib/ftl/ftl_writer.o 00:02:27.676 CC lib/scsi/port.o 00:02:27.676 CC lib/scsi/scsi.o 00:02:27.676 CC lib/ftl/ftl_rq.o 00:02:27.676 CC lib/scsi/scsi_pr.o 00:02:27.676 CC lib/scsi/scsi_bdev.o 00:02:27.676 CC lib/ftl/ftl_l2p_cache.o 00:02:27.676 CC lib/ftl/ftl_reloc.o 00:02:27.676 CC lib/ftl/ftl_p2l.o 00:02:27.676 CC lib/scsi/scsi_rpc.o 00:02:27.676 CC lib/scsi/task.o 00:02:27.676 CC lib/ftl/ftl_p2l_log.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:27.676 CC lib/nbd/nbd.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:27.676 CC lib/nbd/nbd_rpc.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:27.676 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:27.676 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:27.676 CC lib/ftl/utils/ftl_conf.o 00:02:27.676 CC lib/ftl/utils/ftl_md.o 00:02:27.676 SYMLINK libspdk_lvol.so 00:02:27.676 CC lib/ftl/utils/ftl_mempool.o 00:02:27.676 CC lib/ftl/utils/ftl_property.o 00:02:27.676 CC lib/ftl/utils/ftl_bitmap.o 00:02:27.676 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:27.676 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:27.676 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:27.676 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:27.676 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:27.676 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:27.676 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:27.676 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:27.676 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:27.676 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:27.676 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:27.676 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:27.967 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:27.967 CC lib/ftl/base/ftl_base_dev.o 00:02:27.967 CC lib/ftl/ftl_trace.o 00:02:27.967 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.307 LIB libspdk_nbd.a 00:02:28.593 SO libspdk_nbd.so.7.0 00:02:28.593 LIB libspdk_scsi.a 00:02:28.593 SYMLINK libspdk_nbd.so 00:02:28.593 SO libspdk_scsi.so.9.0 00:02:28.593 LIB libspdk_ublk.a 00:02:28.593 SO libspdk_ublk.so.3.0 00:02:28.593 SYMLINK libspdk_scsi.so 00:02:28.593 SYMLINK libspdk_ublk.so 00:02:29.158 CC lib/vhost/vhost.o 00:02:29.158 CC lib/vhost/vhost_rpc.o 00:02:29.158 CC lib/vhost/vhost_scsi.o 00:02:29.158 CC lib/vhost/rte_vhost_user.o 00:02:29.158 CC lib/vhost/vhost_blk.o 00:02:29.158 CC lib/iscsi/conn.o 00:02:29.158 CC lib/iscsi/init_grp.o 00:02:29.158 CC lib/iscsi/iscsi.o 00:02:29.158 CC lib/iscsi/param.o 00:02:29.158 CC lib/iscsi/portal_grp.o 00:02:29.158 CC lib/iscsi/tgt_node.o 00:02:29.159 CC lib/iscsi/iscsi_subsystem.o 00:02:29.159 CC lib/iscsi/iscsi_rpc.o 00:02:29.159 CC lib/iscsi/task.o 00:02:29.159 LIB libspdk_ftl.a 00:02:29.159 SO libspdk_ftl.so.9.0 00:02:29.416 SYMLINK libspdk_ftl.so 00:02:29.981 LIB libspdk_vhost.a 00:02:29.981 SO libspdk_vhost.so.8.0 00:02:29.981 SYMLINK libspdk_vhost.so 00:02:30.239 LIB libspdk_nvmf.a 00:02:30.239 SO libspdk_nvmf.so.20.0 00:02:30.239 LIB libspdk_iscsi.a 00:02:30.497 SYMLINK libspdk_nvmf.so 00:02:30.497 SO libspdk_iscsi.so.8.0 00:02:30.497 SYMLINK libspdk_iscsi.so 00:02:31.064 CC module/env_dpdk/env_dpdk_rpc.o 00:02:31.064 CC module/sock/posix/posix.o 00:02:31.322 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:31.322 CC module/accel/error/accel_error.o 00:02:31.322 CC module/accel/dsa/accel_dsa.o 00:02:31.322 CC module/accel/error/accel_error_rpc.o 00:02:31.322 CC module/accel/dsa/accel_dsa_rpc.o 00:02:31.322 LIB libspdk_env_dpdk_rpc.a 00:02:31.322 CC module/accel/iaa/accel_iaa.o 00:02:31.322 CC module/accel/iaa/accel_iaa_rpc.o 00:02:31.322 CC module/accel/ioat/accel_ioat.o 00:02:31.322 CC module/accel/ioat/accel_ioat_rpc.o 00:02:31.322 CC module/scheduler/gscheduler/gscheduler.o 00:02:31.322 CC module/fsdev/aio/fsdev_aio.o 00:02:31.322 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:31.322 CC module/fsdev/aio/linux_aio_mgr.o 00:02:31.322 CC module/keyring/file/keyring.o 00:02:31.322 CC module/keyring/file/keyring_rpc.o 00:02:31.322 CC module/blob/bdev/blob_bdev.o 00:02:31.322 CC module/keyring/linux/keyring_rpc.o 00:02:31.322 CC module/keyring/linux/keyring.o 00:02:31.322 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:31.322 SO libspdk_env_dpdk_rpc.so.6.0 00:02:31.322 SYMLINK libspdk_env_dpdk_rpc.so 00:02:31.322 LIB libspdk_keyring_linux.a 00:02:31.322 LIB 
libspdk_scheduler_dpdk_governor.a 00:02:31.322 LIB libspdk_scheduler_gscheduler.a 00:02:31.322 LIB libspdk_keyring_file.a 00:02:31.322 SO libspdk_keyring_linux.so.1.0 00:02:31.322 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:31.322 LIB libspdk_accel_ioat.a 00:02:31.322 SO libspdk_scheduler_gscheduler.so.4.0 00:02:31.322 SO libspdk_keyring_file.so.2.0 00:02:31.322 LIB libspdk_accel_iaa.a 00:02:31.322 LIB libspdk_accel_error.a 00:02:31.323 LIB libspdk_scheduler_dynamic.a 00:02:31.323 SO libspdk_accel_ioat.so.6.0 00:02:31.581 SYMLINK libspdk_keyring_linux.so 00:02:31.581 SO libspdk_accel_iaa.so.3.0 00:02:31.581 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.581 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.581 SO libspdk_accel_error.so.2.0 00:02:31.581 SYMLINK libspdk_keyring_file.so 00:02:31.581 SO libspdk_scheduler_dynamic.so.4.0 00:02:31.581 SYMLINK libspdk_accel_ioat.so 00:02:31.581 SYMLINK libspdk_accel_iaa.so 00:02:31.581 LIB libspdk_accel_dsa.a 00:02:31.581 LIB libspdk_blob_bdev.a 00:02:31.581 SYMLINK libspdk_accel_error.so 00:02:31.581 SO libspdk_accel_dsa.so.5.0 00:02:31.581 SYMLINK libspdk_scheduler_dynamic.so 00:02:31.581 SO libspdk_blob_bdev.so.12.0 00:02:31.581 SYMLINK libspdk_blob_bdev.so 00:02:31.581 SYMLINK libspdk_accel_dsa.so 00:02:31.840 LIB libspdk_sock_posix.a 00:02:31.840 LIB libspdk_fsdev_aio.a 00:02:31.840 SO libspdk_fsdev_aio.so.1.0 00:02:31.840 SO libspdk_sock_posix.so.6.0 00:02:32.099 SYMLINK libspdk_fsdev_aio.so 00:02:32.099 SYMLINK libspdk_sock_posix.so 00:02:32.099 CC module/bdev/delay/vbdev_delay.o 00:02:32.099 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:32.099 CC module/bdev/lvol/vbdev_lvol.o 00:02:32.099 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:32.099 CC module/bdev/error/vbdev_error_rpc.o 00:02:32.099 CC module/bdev/error/vbdev_error.o 00:02:32.099 CC module/bdev/null/bdev_null_rpc.o 00:02:32.099 CC module/bdev/null/bdev_null.o 00:02:32.099 CC module/bdev/ftl/bdev_ftl.o 00:02:32.099 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:32.099 CC module/blobfs/bdev/blobfs_bdev.o 00:02:32.099 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:32.099 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:32.099 CC module/bdev/aio/bdev_aio.o 00:02:32.099 CC module/bdev/passthru/vbdev_passthru.o 00:02:32.100 CC module/bdev/aio/bdev_aio_rpc.o 00:02:32.100 CC module/bdev/gpt/gpt.o 00:02:32.100 CC module/bdev/nvme/bdev_nvme.o 00:02:32.100 CC module/bdev/malloc/bdev_malloc.o 00:02:32.100 CC module/bdev/raid/bdev_raid.o 00:02:32.100 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:32.100 CC module/bdev/raid/bdev_raid_rpc.o 00:02:32.100 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:32.100 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:32.100 CC module/bdev/split/vbdev_split_rpc.o 00:02:32.100 CC module/bdev/nvme/nvme_rpc.o 00:02:32.100 CC module/bdev/gpt/vbdev_gpt.o 00:02:32.100 CC module/bdev/split/vbdev_split.o 00:02:32.100 CC module/bdev/nvme/bdev_mdns_client.o 00:02:32.100 CC module/bdev/raid/bdev_raid_sb.o 00:02:32.100 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:32.100 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:32.100 CC module/bdev/nvme/vbdev_opal.o 00:02:32.100 CC module/bdev/raid/raid1.o 00:02:32.100 CC module/bdev/raid/raid0.o 00:02:32.100 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:32.100 CC module/bdev/iscsi/bdev_iscsi.o 00:02:32.100 CC module/bdev/raid/concat.o 00:02:32.100 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:32.100 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:32.100 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:32.100 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:02:32.358 LIB libspdk_blobfs_bdev.a 00:02:32.358 SO libspdk_blobfs_bdev.so.6.0 00:02:32.358 LIB libspdk_bdev_split.a 00:02:32.358 SYMLINK libspdk_blobfs_bdev.so 00:02:32.358 SO libspdk_bdev_split.so.6.0 00:02:32.358 LIB libspdk_bdev_error.a 00:02:32.358 LIB libspdk_bdev_null.a 00:02:32.358 LIB libspdk_bdev_ftl.a 00:02:32.617 SO libspdk_bdev_null.so.6.0 00:02:32.617 LIB libspdk_bdev_gpt.a 00:02:32.617 SO libspdk_bdev_error.so.6.0 00:02:32.617 SO libspdk_bdev_ftl.so.6.0 00:02:32.617 LIB libspdk_bdev_passthru.a 00:02:32.617 SYMLINK libspdk_bdev_split.so 00:02:32.617 SO libspdk_bdev_gpt.so.6.0 00:02:32.617 SO libspdk_bdev_passthru.so.6.0 00:02:32.617 LIB libspdk_bdev_zone_block.a 00:02:32.617 LIB libspdk_bdev_delay.a 00:02:32.617 SYMLINK libspdk_bdev_null.so 00:02:32.617 LIB libspdk_bdev_aio.a 00:02:32.617 SYMLINK libspdk_bdev_error.so 00:02:32.617 SO libspdk_bdev_zone_block.so.6.0 00:02:32.617 SO libspdk_bdev_delay.so.6.0 00:02:32.617 SYMLINK libspdk_bdev_ftl.so 00:02:32.617 SO libspdk_bdev_aio.so.6.0 00:02:32.617 SYMLINK libspdk_bdev_gpt.so 00:02:32.617 LIB libspdk_bdev_iscsi.a 00:02:32.617 SYMLINK libspdk_bdev_passthru.so 00:02:32.617 LIB libspdk_bdev_malloc.a 00:02:32.617 SO libspdk_bdev_iscsi.so.6.0 00:02:32.617 SYMLINK libspdk_bdev_zone_block.so 00:02:32.617 SO libspdk_bdev_malloc.so.6.0 00:02:32.617 SYMLINK libspdk_bdev_delay.so 00:02:32.617 SYMLINK libspdk_bdev_aio.so 00:02:32.617 SYMLINK libspdk_bdev_iscsi.so 00:02:32.617 SYMLINK libspdk_bdev_malloc.so 00:02:32.617 LIB libspdk_bdev_lvol.a 00:02:32.617 LIB libspdk_bdev_virtio.a 00:02:32.875 SO libspdk_bdev_lvol.so.6.0 00:02:32.875 SO libspdk_bdev_virtio.so.6.0 00:02:32.875 SYMLINK libspdk_bdev_lvol.so 00:02:32.875 SYMLINK libspdk_bdev_virtio.so 00:02:33.135 LIB libspdk_bdev_raid.a 00:02:33.394 SO libspdk_bdev_raid.so.6.0 00:02:33.394 SYMLINK libspdk_bdev_raid.so 00:02:34.772 LIB libspdk_bdev_nvme.a 00:02:34.772 SO libspdk_bdev_nvme.so.7.1 00:02:34.772 SYMLINK libspdk_bdev_nvme.so 00:02:35.339 CC module/event/subsystems/iobuf/iobuf.o 00:02:35.339 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:35.339 CC module/event/subsystems/scheduler/scheduler.o 00:02:35.339 CC module/event/subsystems/sock/sock.o 00:02:35.339 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:35.339 CC module/event/subsystems/keyring/keyring.o 00:02:35.339 CC module/event/subsystems/fsdev/fsdev.o 00:02:35.339 CC module/event/subsystems/vmd/vmd.o 00:02:35.339 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:35.597 LIB libspdk_event_scheduler.a 00:02:35.597 LIB libspdk_event_vhost_blk.a 00:02:35.597 LIB libspdk_event_fsdev.a 00:02:35.597 LIB libspdk_event_keyring.a 00:02:35.597 LIB libspdk_event_sock.a 00:02:35.597 LIB libspdk_event_iobuf.a 00:02:35.597 LIB libspdk_event_vmd.a 00:02:35.597 SO libspdk_event_vhost_blk.so.3.0 00:02:35.597 SO libspdk_event_scheduler.so.4.0 00:02:35.597 SO libspdk_event_sock.so.5.0 00:02:35.597 SO libspdk_event_fsdev.so.1.0 00:02:35.597 SO libspdk_event_keyring.so.1.0 00:02:35.597 SO libspdk_event_iobuf.so.3.0 00:02:35.597 SO libspdk_event_vmd.so.6.0 00:02:35.597 SYMLINK libspdk_event_vhost_blk.so 00:02:35.597 SYMLINK libspdk_event_scheduler.so 00:02:35.598 SYMLINK libspdk_event_keyring.so 00:02:35.598 SYMLINK libspdk_event_fsdev.so 00:02:35.598 SYMLINK libspdk_event_sock.so 00:02:35.598 SYMLINK libspdk_event_iobuf.so 00:02:35.598 SYMLINK libspdk_event_vmd.so 00:02:35.856 CC module/event/subsystems/accel/accel.o 00:02:36.115 LIB libspdk_event_accel.a 00:02:36.115 SO libspdk_event_accel.so.6.0 
00:02:36.115 SYMLINK libspdk_event_accel.so 00:02:36.374 CC module/event/subsystems/bdev/bdev.o 00:02:36.634 LIB libspdk_event_bdev.a 00:02:36.634 SO libspdk_event_bdev.so.6.0 00:02:36.634 SYMLINK libspdk_event_bdev.so 00:02:37.202 CC module/event/subsystems/scsi/scsi.o 00:02:37.202 CC module/event/subsystems/ublk/ublk.o 00:02:37.202 CC module/event/subsystems/nbd/nbd.o 00:02:37.202 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.202 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.202 LIB libspdk_event_ublk.a 00:02:37.202 SO libspdk_event_ublk.so.3.0 00:02:37.202 LIB libspdk_event_scsi.a 00:02:37.202 LIB libspdk_event_nbd.a 00:02:37.202 SO libspdk_event_scsi.so.6.0 00:02:37.202 SO libspdk_event_nbd.so.6.0 00:02:37.202 SYMLINK libspdk_event_ublk.so 00:02:37.202 SYMLINK libspdk_event_scsi.so 00:02:37.202 LIB libspdk_event_nvmf.a 00:02:37.202 SYMLINK libspdk_event_nbd.so 00:02:37.461 SO libspdk_event_nvmf.so.6.0 00:02:37.461 SYMLINK libspdk_event_nvmf.so 00:02:37.720 CC module/event/subsystems/iscsi/iscsi.o 00:02:37.720 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:37.720 LIB libspdk_event_vhost_scsi.a 00:02:37.720 LIB libspdk_event_iscsi.a 00:02:37.720 SO libspdk_event_vhost_scsi.so.3.0 00:02:37.720 SO libspdk_event_iscsi.so.6.0 00:02:37.979 SYMLINK libspdk_event_vhost_scsi.so 00:02:37.979 SYMLINK libspdk_event_iscsi.so 00:02:37.979 SO libspdk.so.6.0 00:02:37.979 SYMLINK libspdk.so 00:02:38.548 CC test/rpc_client/rpc_client_test.o 00:02:38.548 CC app/spdk_lspci/spdk_lspci.o 00:02:38.548 CC app/spdk_nvme_perf/perf.o 00:02:38.548 TEST_HEADER include/spdk/accel.h 00:02:38.548 TEST_HEADER include/spdk/assert.h 00:02:38.548 TEST_HEADER include/spdk/barrier.h 00:02:38.548 TEST_HEADER include/spdk/accel_module.h 00:02:38.548 TEST_HEADER include/spdk/bdev_module.h 00:02:38.548 TEST_HEADER include/spdk/bdev.h 00:02:38.548 TEST_HEADER include/spdk/bdev_zone.h 00:02:38.548 TEST_HEADER include/spdk/bit_array.h 00:02:38.548 TEST_HEADER include/spdk/base64.h 00:02:38.548 TEST_HEADER include/spdk/bit_pool.h 00:02:38.548 CC app/trace_record/trace_record.o 00:02:38.548 TEST_HEADER include/spdk/blobfs.h 00:02:38.548 TEST_HEADER include/spdk/blob_bdev.h 00:02:38.548 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:38.548 TEST_HEADER include/spdk/blob.h 00:02:38.548 CC app/spdk_nvme_discover/discovery_aer.o 00:02:38.548 TEST_HEADER include/spdk/conf.h 00:02:38.548 TEST_HEADER include/spdk/config.h 00:02:38.548 TEST_HEADER include/spdk/cpuset.h 00:02:38.548 CC app/spdk_nvme_identify/identify.o 00:02:38.548 TEST_HEADER include/spdk/crc32.h 00:02:38.548 TEST_HEADER include/spdk/crc16.h 00:02:38.548 TEST_HEADER include/spdk/crc64.h 00:02:38.548 TEST_HEADER include/spdk/dif.h 00:02:38.548 CXX app/trace/trace.o 00:02:38.548 TEST_HEADER include/spdk/dma.h 00:02:38.548 TEST_HEADER include/spdk/endian.h 00:02:38.548 TEST_HEADER include/spdk/env.h 00:02:38.548 TEST_HEADER include/spdk/env_dpdk.h 00:02:38.548 TEST_HEADER include/spdk/fd.h 00:02:38.548 TEST_HEADER include/spdk/fd_group.h 00:02:38.548 TEST_HEADER include/spdk/event.h 00:02:38.548 TEST_HEADER include/spdk/fsdev.h 00:02:38.548 CC app/spdk_top/spdk_top.o 00:02:38.548 TEST_HEADER include/spdk/file.h 00:02:38.548 TEST_HEADER include/spdk/ftl.h 00:02:38.548 TEST_HEADER include/spdk/fsdev_module.h 00:02:38.548 TEST_HEADER include/spdk/gpt_spec.h 00:02:38.548 TEST_HEADER include/spdk/hexlify.h 00:02:38.548 TEST_HEADER include/spdk/idxd.h 00:02:38.548 TEST_HEADER include/spdk/histogram_data.h 00:02:38.548 TEST_HEADER include/spdk/idxd_spec.h 
00:02:38.548 TEST_HEADER include/spdk/ioat.h 00:02:38.548 TEST_HEADER include/spdk/ioat_spec.h 00:02:38.548 TEST_HEADER include/spdk/init.h 00:02:38.548 TEST_HEADER include/spdk/iscsi_spec.h 00:02:38.548 TEST_HEADER include/spdk/json.h 00:02:38.548 TEST_HEADER include/spdk/jsonrpc.h 00:02:38.548 TEST_HEADER include/spdk/keyring.h 00:02:38.548 TEST_HEADER include/spdk/likely.h 00:02:38.548 TEST_HEADER include/spdk/log.h 00:02:38.548 TEST_HEADER include/spdk/keyring_module.h 00:02:38.548 TEST_HEADER include/spdk/lvol.h 00:02:38.548 TEST_HEADER include/spdk/md5.h 00:02:38.548 TEST_HEADER include/spdk/memory.h 00:02:38.548 TEST_HEADER include/spdk/mmio.h 00:02:38.548 TEST_HEADER include/spdk/nbd.h 00:02:38.548 TEST_HEADER include/spdk/net.h 00:02:38.548 TEST_HEADER include/spdk/nvme.h 00:02:38.548 TEST_HEADER include/spdk/notify.h 00:02:38.548 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:38.548 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:38.548 TEST_HEADER include/spdk/nvme_spec.h 00:02:38.548 TEST_HEADER include/spdk/nvme_intel.h 00:02:38.548 TEST_HEADER include/spdk/nvme_zns.h 00:02:38.548 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:38.548 CC app/nvmf_tgt/nvmf_main.o 00:02:38.548 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:38.548 TEST_HEADER include/spdk/nvmf.h 00:02:38.548 TEST_HEADER include/spdk/nvmf_transport.h 00:02:38.548 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:38.548 TEST_HEADER include/spdk/nvmf_spec.h 00:02:38.548 TEST_HEADER include/spdk/opal_spec.h 00:02:38.548 TEST_HEADER include/spdk/pci_ids.h 00:02:38.548 TEST_HEADER include/spdk/opal.h 00:02:38.548 TEST_HEADER include/spdk/pipe.h 00:02:38.548 TEST_HEADER include/spdk/queue.h 00:02:38.548 TEST_HEADER include/spdk/scheduler.h 00:02:38.548 TEST_HEADER include/spdk/reduce.h 00:02:38.548 TEST_HEADER include/spdk/scsi_spec.h 00:02:38.548 TEST_HEADER include/spdk/scsi.h 00:02:38.548 TEST_HEADER include/spdk/rpc.h 00:02:38.548 TEST_HEADER include/spdk/sock.h 00:02:38.548 TEST_HEADER include/spdk/string.h 00:02:38.548 TEST_HEADER include/spdk/stdinc.h 00:02:38.548 TEST_HEADER include/spdk/trace.h 00:02:38.548 TEST_HEADER include/spdk/trace_parser.h 00:02:38.548 TEST_HEADER include/spdk/thread.h 00:02:38.548 CC app/spdk_dd/spdk_dd.o 00:02:38.548 TEST_HEADER include/spdk/tree.h 00:02:38.548 TEST_HEADER include/spdk/ublk.h 00:02:38.548 TEST_HEADER include/spdk/version.h 00:02:38.548 TEST_HEADER include/spdk/util.h 00:02:38.548 TEST_HEADER include/spdk/uuid.h 00:02:38.548 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:38.548 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:38.548 TEST_HEADER include/spdk/vhost.h 00:02:38.548 TEST_HEADER include/spdk/xor.h 00:02:38.548 CC app/iscsi_tgt/iscsi_tgt.o 00:02:38.548 TEST_HEADER include/spdk/vmd.h 00:02:38.548 CXX test/cpp_headers/accel.o 00:02:38.548 TEST_HEADER include/spdk/zipf.h 00:02:38.548 CXX test/cpp_headers/accel_module.o 00:02:38.548 CXX test/cpp_headers/barrier.o 00:02:38.548 CXX test/cpp_headers/assert.o 00:02:38.549 CC app/spdk_tgt/spdk_tgt.o 00:02:38.549 CXX test/cpp_headers/base64.o 00:02:38.549 CXX test/cpp_headers/bdev.o 00:02:38.549 CXX test/cpp_headers/bdev_module.o 00:02:38.549 CXX test/cpp_headers/bit_array.o 00:02:38.549 CXX test/cpp_headers/bdev_zone.o 00:02:38.549 CXX test/cpp_headers/bit_pool.o 00:02:38.549 CXX test/cpp_headers/blob_bdev.o 00:02:38.549 CXX test/cpp_headers/blobfs.o 00:02:38.549 CXX test/cpp_headers/blobfs_bdev.o 00:02:38.549 CXX test/cpp_headers/blob.o 00:02:38.549 CXX test/cpp_headers/conf.o 00:02:38.549 CXX test/cpp_headers/cpuset.o 
00:02:38.549 CXX test/cpp_headers/config.o 00:02:38.549 CXX test/cpp_headers/crc16.o 00:02:38.549 CXX test/cpp_headers/crc64.o 00:02:38.549 CXX test/cpp_headers/crc32.o 00:02:38.549 CXX test/cpp_headers/dif.o 00:02:38.549 CXX test/cpp_headers/dma.o 00:02:38.549 CXX test/cpp_headers/env_dpdk.o 00:02:38.549 CXX test/cpp_headers/env.o 00:02:38.549 CXX test/cpp_headers/endian.o 00:02:38.549 CXX test/cpp_headers/event.o 00:02:38.549 CXX test/cpp_headers/fd_group.o 00:02:38.549 CXX test/cpp_headers/file.o 00:02:38.549 CXX test/cpp_headers/fsdev.o 00:02:38.549 CXX test/cpp_headers/fd.o 00:02:38.549 CXX test/cpp_headers/gpt_spec.o 00:02:38.549 CXX test/cpp_headers/fsdev_module.o 00:02:38.549 CXX test/cpp_headers/ftl.o 00:02:38.549 CXX test/cpp_headers/idxd.o 00:02:38.549 CXX test/cpp_headers/hexlify.o 00:02:38.549 CXX test/cpp_headers/histogram_data.o 00:02:38.549 CXX test/cpp_headers/init.o 00:02:38.549 CXX test/cpp_headers/idxd_spec.o 00:02:38.549 CXX test/cpp_headers/iscsi_spec.o 00:02:38.549 CXX test/cpp_headers/ioat.o 00:02:38.549 CXX test/cpp_headers/ioat_spec.o 00:02:38.549 CXX test/cpp_headers/jsonrpc.o 00:02:38.549 CXX test/cpp_headers/keyring.o 00:02:38.549 CXX test/cpp_headers/json.o 00:02:38.549 CXX test/cpp_headers/keyring_module.o 00:02:38.549 CXX test/cpp_headers/log.o 00:02:38.549 CXX test/cpp_headers/likely.o 00:02:38.549 CXX test/cpp_headers/lvol.o 00:02:38.549 CXX test/cpp_headers/md5.o 00:02:38.549 CXX test/cpp_headers/memory.o 00:02:38.549 CXX test/cpp_headers/net.o 00:02:38.549 CXX test/cpp_headers/nbd.o 00:02:38.549 CXX test/cpp_headers/mmio.o 00:02:38.549 CXX test/cpp_headers/notify.o 00:02:38.549 CXX test/cpp_headers/nvme.o 00:02:38.549 CXX test/cpp_headers/nvme_intel.o 00:02:38.549 CXX test/cpp_headers/nvme_ocssd.o 00:02:38.549 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:38.549 CXX test/cpp_headers/nvme_spec.o 00:02:38.549 CXX test/cpp_headers/nvmf_cmd.o 00:02:38.549 CXX test/cpp_headers/nvme_zns.o 00:02:38.549 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:38.549 CXX test/cpp_headers/nvmf.o 00:02:38.549 CXX test/cpp_headers/nvmf_spec.o 00:02:38.549 CXX test/cpp_headers/nvmf_transport.o 00:02:38.549 CXX test/cpp_headers/opal.o 00:02:38.549 CXX test/cpp_headers/opal_spec.o 00:02:38.549 CC test/env/memory/memory_ut.o 00:02:38.549 CC test/env/vtophys/vtophys.o 00:02:38.549 CXX test/cpp_headers/pci_ids.o 00:02:38.549 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:38.549 CC test/env/pci/pci_ut.o 00:02:38.549 CC test/dma/test_dma/test_dma.o 00:02:38.549 CC test/app/stub/stub.o 00:02:38.819 CC test/thread/poller_perf/poller_perf.o 00:02:38.819 CC examples/ioat/perf/perf.o 00:02:38.819 CC examples/ioat/verify/verify.o 00:02:38.819 CC examples/util/zipf/zipf.o 00:02:38.819 CC test/app/jsoncat/jsoncat.o 00:02:38.819 CC app/fio/nvme/fio_plugin.o 00:02:38.819 CC test/app/histogram_perf/histogram_perf.o 00:02:38.819 CC test/app/bdev_svc/bdev_svc.o 00:02:38.819 LINK spdk_lspci 00:02:38.819 CC app/fio/bdev/fio_plugin.o 00:02:38.819 LINK nvmf_tgt 00:02:39.084 LINK interrupt_tgt 00:02:39.084 LINK rpc_client_test 00:02:39.084 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.084 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.084 LINK spdk_nvme_discover 00:02:39.084 CXX test/cpp_headers/pipe.o 00:02:39.084 LINK jsoncat 00:02:39.084 CXX test/cpp_headers/queue.o 00:02:39.084 CXX test/cpp_headers/reduce.o 00:02:39.084 LINK poller_perf 00:02:39.084 CXX test/cpp_headers/rpc.o 00:02:39.084 CXX test/cpp_headers/scheduler.o 00:02:39.084 CXX test/cpp_headers/scsi.o 00:02:39.084 CXX 
test/cpp_headers/scsi_spec.o 00:02:39.084 CXX test/cpp_headers/sock.o 00:02:39.084 LINK histogram_perf 00:02:39.084 CXX test/cpp_headers/stdinc.o 00:02:39.084 CXX test/cpp_headers/thread.o 00:02:39.084 CXX test/cpp_headers/string.o 00:02:39.084 CXX test/cpp_headers/trace.o 00:02:39.084 CXX test/cpp_headers/trace_parser.o 00:02:39.345 LINK iscsi_tgt 00:02:39.345 CXX test/cpp_headers/tree.o 00:02:39.345 CXX test/cpp_headers/ublk.o 00:02:39.345 LINK vtophys 00:02:39.345 CXX test/cpp_headers/util.o 00:02:39.345 CXX test/cpp_headers/uuid.o 00:02:39.345 CXX test/cpp_headers/version.o 00:02:39.345 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.345 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.345 CXX test/cpp_headers/vhost.o 00:02:39.345 CXX test/cpp_headers/vmd.o 00:02:39.345 LINK spdk_trace_record 00:02:39.345 CXX test/cpp_headers/xor.o 00:02:39.345 CXX test/cpp_headers/zipf.o 00:02:39.345 LINK spdk_tgt 00:02:39.345 LINK env_dpdk_post_init 00:02:39.345 LINK zipf 00:02:39.345 LINK bdev_svc 00:02:39.345 LINK stub 00:02:39.345 LINK ioat_perf 00:02:39.345 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.345 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.345 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.345 LINK verify 00:02:39.345 LINK spdk_dd 00:02:39.604 LINK spdk_trace 00:02:39.604 LINK pci_ut 00:02:39.604 CC test/event/event_perf/event_perf.o 00:02:39.604 CC test/event/reactor_perf/reactor_perf.o 00:02:39.862 CC test/event/reactor/reactor.o 00:02:39.862 CC test/event/app_repeat/app_repeat.o 00:02:39.862 CC test/event/scheduler/scheduler.o 00:02:39.862 LINK test_dma 00:02:39.862 LINK nvme_fuzz 00:02:39.862 CC examples/sock/hello_world/hello_sock.o 00:02:39.862 LINK spdk_bdev 00:02:39.862 CC examples/vmd/lsvmd/lsvmd.o 00:02:39.862 CC examples/idxd/perf/perf.o 00:02:39.862 CC examples/vmd/led/led.o 00:02:39.862 CC examples/thread/thread/thread_ex.o 00:02:39.862 LINK spdk_nvme 00:02:39.862 LINK mem_callbacks 00:02:39.862 LINK event_perf 00:02:39.862 LINK reactor 00:02:39.862 LINK vhost_fuzz 00:02:39.862 LINK reactor_perf 00:02:39.862 LINK spdk_nvme_perf 00:02:39.862 CC app/vhost/vhost.o 00:02:39.862 LINK app_repeat 00:02:39.862 LINK spdk_nvme_identify 00:02:40.120 LINK lsvmd 00:02:40.120 LINK led 00:02:40.120 LINK scheduler 00:02:40.120 LINK spdk_top 00:02:40.120 LINK hello_sock 00:02:40.120 LINK vhost 00:02:40.120 LINK thread 00:02:40.120 LINK idxd_perf 00:02:40.379 CC test/nvme/err_injection/err_injection.o 00:02:40.379 CC test/nvme/e2edp/nvme_dp.o 00:02:40.379 CC test/nvme/overhead/overhead.o 00:02:40.379 CC test/nvme/sgl/sgl.o 00:02:40.379 CC test/nvme/cuse/cuse.o 00:02:40.379 CC test/nvme/reset/reset.o 00:02:40.379 CC test/nvme/startup/startup.o 00:02:40.379 CC test/nvme/compliance/nvme_compliance.o 00:02:40.379 CC test/nvme/reserve/reserve.o 00:02:40.379 CC test/nvme/fused_ordering/fused_ordering.o 00:02:40.379 CC test/nvme/simple_copy/simple_copy.o 00:02:40.379 CC test/nvme/aer/aer.o 00:02:40.379 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:40.379 CC test/nvme/connect_stress/connect_stress.o 00:02:40.379 CC test/nvme/boot_partition/boot_partition.o 00:02:40.379 CC test/nvme/fdp/fdp.o 00:02:40.379 CC test/blobfs/mkfs/mkfs.o 00:02:40.379 LINK memory_ut 00:02:40.379 CC test/accel/dif/dif.o 00:02:40.379 CC test/lvol/esnap/esnap.o 00:02:40.637 LINK err_injection 00:02:40.637 LINK startup 00:02:40.637 CC examples/nvme/arbitration/arbitration.o 00:02:40.637 LINK boot_partition 00:02:40.637 CC examples/nvme/hello_world/hello_world.o 00:02:40.637 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:02:40.637 CC examples/nvme/reconnect/reconnect.o 00:02:40.637 LINK fused_ordering 00:02:40.637 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:40.637 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:40.637 CC examples/nvme/hotplug/hotplug.o 00:02:40.637 LINK connect_stress 00:02:40.637 LINK doorbell_aers 00:02:40.637 CC examples/nvme/abort/abort.o 00:02:40.637 LINK reserve 00:02:40.637 LINK mkfs 00:02:40.637 LINK reset 00:02:40.637 LINK simple_copy 00:02:40.637 LINK sgl 00:02:40.637 LINK aer 00:02:40.637 LINK nvme_dp 00:02:40.637 LINK overhead 00:02:40.637 CC examples/accel/perf/accel_perf.o 00:02:40.637 CC examples/blob/cli/blobcli.o 00:02:40.637 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:40.637 CC examples/blob/hello_world/hello_blob.o 00:02:40.637 LINK fdp 00:02:40.637 LINK nvme_compliance 00:02:40.637 LINK pmr_persistence 00:02:40.895 LINK cmb_copy 00:02:40.895 LINK hello_world 00:02:40.895 LINK hotplug 00:02:40.895 LINK arbitration 00:02:40.895 LINK reconnect 00:02:40.895 LINK hello_blob 00:02:40.895 LINK abort 00:02:40.895 LINK hello_fsdev 00:02:41.153 LINK nvme_manage 00:02:41.153 LINK dif 00:02:41.153 LINK blobcli 00:02:41.153 LINK accel_perf 00:02:41.153 LINK iscsi_fuzz 00:02:41.721 LINK cuse 00:02:41.721 CC test/bdev/bdevio/bdevio.o 00:02:41.721 CC examples/bdev/hello_world/hello_bdev.o 00:02:41.721 CC examples/bdev/bdevperf/bdevperf.o 00:02:41.979 LINK hello_bdev 00:02:41.979 LINK bdevio 00:02:42.546 LINK bdevperf 00:02:43.113 CC examples/nvmf/nvmf/nvmf.o 00:02:43.372 LINK nvmf 00:02:45.275 LINK esnap 00:02:45.534 00:02:45.534 real 1m0.373s 00:02:45.534 user 8m53.122s 00:02:45.534 sys 3m36.596s 00:02:45.534 03:13:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:45.534 03:13:46 make -- common/autotest_common.sh@10 -- $ set +x 00:02:45.534 ************************************ 00:02:45.534 END TEST make 00:02:45.534 ************************************ 00:02:45.534 03:13:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:45.534 03:13:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:45.534 03:13:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:45.534 03:13:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.794 03:13:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:45.794 03:13:46 -- pm/common@44 -- $ pid=2373122 00:02:45.794 03:13:46 -- pm/common@50 -- $ kill -TERM 2373122 00:02:45.794 03:13:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.794 03:13:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:45.794 03:13:46 -- pm/common@44 -- $ pid=2373124 00:02:45.794 03:13:46 -- pm/common@50 -- $ kill -TERM 2373124 00:02:45.794 03:13:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.794 03:13:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:45.794 03:13:46 -- pm/common@44 -- $ pid=2373127 00:02:45.794 03:13:46 -- pm/common@50 -- $ kill -TERM 2373127 00:02:45.794 03:13:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.794 03:13:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:45.794 03:13:46 -- pm/common@44 -- $ pid=2373151 00:02:45.795 03:13:46 -- pm/common@50 -- $ sudo -E kill -TERM 2373151 
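
The pm/common trace directly above tears down the resource monitors launched at the start of the run: for each monitor it tests whether a PID file exists under the power/ output directory and, if it does, sends SIGTERM to the recorded PID (with sudo for the BMC collector). A minimal bash sketch of that pattern, using the monitor names and output path visible in the trace; the loop itself is illustrative, not the exact pm/common helper.

    output_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$output_dir/$monitor.pid"
        [[ -e "$pidfile" ]] || continue      # monitor was never started
        kill -TERM "$(cat "$pidfile")"       # ask the collector to exit cleanly
    done
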
00:02:45.795 03:13:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:45.795 03:13:46 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:45.795 03:13:46 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:45.795 03:13:46 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:45.795 03:13:46 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:45.795 03:13:46 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:45.795 03:13:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:45.795 03:13:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:45.795 03:13:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:45.795 03:13:46 -- scripts/common.sh@336 -- # IFS=.-: 00:02:45.795 03:13:46 -- scripts/common.sh@336 -- # read -ra ver1 00:02:45.795 03:13:46 -- scripts/common.sh@337 -- # IFS=.-: 00:02:45.795 03:13:46 -- scripts/common.sh@337 -- # read -ra ver2 00:02:45.795 03:13:46 -- scripts/common.sh@338 -- # local 'op=<' 00:02:45.795 03:13:46 -- scripts/common.sh@340 -- # ver1_l=2 00:02:45.795 03:13:46 -- scripts/common.sh@341 -- # ver2_l=1 00:02:45.795 03:13:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:45.795 03:13:46 -- scripts/common.sh@344 -- # case "$op" in 00:02:45.795 03:13:46 -- scripts/common.sh@345 -- # : 1 00:02:45.795 03:13:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:45.795 03:13:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:45.795 03:13:46 -- scripts/common.sh@365 -- # decimal 1 00:02:45.795 03:13:46 -- scripts/common.sh@353 -- # local d=1 00:02:45.795 03:13:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:45.795 03:13:46 -- scripts/common.sh@355 -- # echo 1 00:02:45.795 03:13:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:45.795 03:13:46 -- scripts/common.sh@366 -- # decimal 2 00:02:45.795 03:13:46 -- scripts/common.sh@353 -- # local d=2 00:02:45.795 03:13:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:45.795 03:13:46 -- scripts/common.sh@355 -- # echo 2 00:02:45.795 03:13:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:45.795 03:13:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:45.795 03:13:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:45.795 03:13:46 -- scripts/common.sh@368 -- # return 0 00:02:45.795 03:13:46 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:45.795 03:13:46 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.795 --rc genhtml_branch_coverage=1 00:02:45.795 --rc genhtml_function_coverage=1 00:02:45.795 --rc genhtml_legend=1 00:02:45.795 --rc geninfo_all_blocks=1 00:02:45.795 --rc geninfo_unexecuted_blocks=1 00:02:45.795 00:02:45.795 ' 00:02:45.795 03:13:46 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.795 --rc genhtml_branch_coverage=1 00:02:45.795 --rc genhtml_function_coverage=1 00:02:45.795 --rc genhtml_legend=1 00:02:45.795 --rc geninfo_all_blocks=1 00:02:45.795 --rc geninfo_unexecuted_blocks=1 00:02:45.795 00:02:45.795 ' 00:02:45.795 03:13:46 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.795 --rc genhtml_branch_coverage=1 
00:02:45.795 --rc genhtml_function_coverage=1 00:02:45.795 --rc genhtml_legend=1 00:02:45.795 --rc geninfo_all_blocks=1 00:02:45.795 --rc geninfo_unexecuted_blocks=1 00:02:45.795 00:02:45.795 ' 00:02:45.795 03:13:46 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:45.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.795 --rc genhtml_branch_coverage=1 00:02:45.795 --rc genhtml_function_coverage=1 00:02:45.795 --rc genhtml_legend=1 00:02:45.795 --rc geninfo_all_blocks=1 00:02:45.795 --rc geninfo_unexecuted_blocks=1 00:02:45.795 00:02:45.795 ' 00:02:45.795 03:13:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:45.795 03:13:46 -- nvmf/common.sh@7 -- # uname -s 00:02:45.795 03:13:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:45.795 03:13:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:45.795 03:13:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:45.795 03:13:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:45.795 03:13:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:45.795 03:13:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:45.795 03:13:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:45.795 03:13:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:45.795 03:13:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:45.795 03:13:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:45.795 03:13:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:45.795 03:13:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:45.795 03:13:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:45.795 03:13:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:45.795 03:13:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:45.795 03:13:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:45.795 03:13:46 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:45.795 03:13:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:45.795 03:13:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:45.795 03:13:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.795 03:13:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.795 03:13:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.795 03:13:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.795 03:13:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.795 03:13:46 -- paths/export.sh@5 -- # export PATH 00:02:45.795 03:13:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.795 03:13:46 -- nvmf/common.sh@51 -- # : 0 00:02:45.795 03:13:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:45.795 03:13:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:45.795 03:13:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:45.795 03:13:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:45.795 03:13:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:45.795 03:13:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:45.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:45.795 03:13:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:45.795 03:13:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:45.795 03:13:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:45.795 03:13:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:45.795 03:13:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:45.795 03:13:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:45.795 03:13:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:45.795 03:13:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:45.795 03:13:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:45.795 03:13:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:45.795 03:13:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:45.795 03:13:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:45.795 03:13:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:45.795 03:13:46 -- spdk/autotest.sh@48 -- # udevadm_pid=2437434 00:02:45.795 03:13:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:45.795 03:13:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:45.795 03:13:46 -- pm/common@17 -- # local monitor 00:02:45.795 03:13:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.795 03:13:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.795 03:13:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.795 03:13:46 -- pm/common@21 -- # date +%s 00:02:45.795 03:13:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.795 03:13:46 -- pm/common@21 -- # date +%s 00:02:45.795 03:13:46 -- pm/common@25 -- # sleep 1 00:02:45.795 03:13:46 -- pm/common@21 -- # date +%s 00:02:45.795 03:13:46 -- pm/common@21 -- # date +%s 00:02:45.795 03:13:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734056026 00:02:45.795 03:13:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734056026 00:02:45.795 03:13:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 
-p monitor.autotest.sh.1734056026 00:02:45.795 03:13:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734056026 00:02:46.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734056026_collect-vmstat.pm.log 00:02:46.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734056026_collect-cpu-load.pm.log 00:02:46.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734056026_collect-bmc-pm.bmc.pm.log 00:02:46.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734056026_collect-cpu-temp.pm.log 00:02:46.992 03:13:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:46.992 03:13:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:46.992 03:13:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:46.992 03:13:47 -- common/autotest_common.sh@10 -- # set +x 00:02:46.992 03:13:47 -- spdk/autotest.sh@59 -- # create_test_list 00:02:46.992 03:13:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:46.992 03:13:47 -- common/autotest_common.sh@10 -- # set +x 00:02:46.992 03:13:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:46.992 03:13:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.992 03:13:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.992 03:13:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:46.992 03:13:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.992 03:13:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:46.992 03:13:48 -- common/autotest_common.sh@1457 -- # uname 00:02:46.992 03:13:48 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:46.992 03:13:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:46.992 03:13:48 -- common/autotest_common.sh@1477 -- # uname 00:02:46.992 03:13:48 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:46.992 03:13:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:46.992 03:13:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:46.992 lcov: LCOV version 1.15 00:02:46.992 03:13:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:05.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:05.080 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:11.651 03:14:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:11.651 03:14:12 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.651 03:14:12 -- common/autotest_common.sh@10 -- # set +x 00:03:11.651 03:14:12 -- spdk/autotest.sh@78 -- # rm -f 00:03:11.651 03:14:12 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.556 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:13.556 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:13.815 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.085 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.085 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.085 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.085 03:14:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:14.085 03:14:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:14.085 03:14:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:14.085 03:14:15 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:14.085 03:14:15 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:14.085 03:14:15 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:14.085 03:14:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:14.085 03:14:15 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:14.085 03:14:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:14.085 03:14:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:14.085 03:14:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:14.085 03:14:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.085 03:14:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:14.085 03:14:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:14.085 03:14:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.085 03:14:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.085 03:14:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:14.085 03:14:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:14.085 03:14:15 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.085 No valid GPT data, bailing 00:03:14.085 03:14:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.085 03:14:15 -- scripts/common.sh@394 -- # pt= 00:03:14.085 03:14:15 -- scripts/common.sh@395 -- # return 1 00:03:14.086 03:14:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.086 1+0 records in 00:03:14.086 1+0 records out 00:03:14.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542147 s, 193 MB/s 
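The wipe traced just above only fires when the namespace carries no partition table: spdk-gpt.py reports "No valid GPT data, bailing", blkid returns an empty PTTYPE, block_in_use returns 1, and autotest then clears the first MiB with dd. A condensed, hypothetical sketch of that guard (not the verbatim autotest.sh/common.sh logic), reusing the device and helper paths from this trace:

dev=/dev/nvme0n1
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/scripts/spdk-gpt.py" "$dev" || true        # prints "No valid GPT data, bailing" on a blank namespace
pt=$(blkid -s PTTYPE -o value "$dev" || true)        # empty when no partition table is present
if [ -z "$pt" ]; then
    # nothing owns the namespace, so scrub the first MiB before the tests start, as above
    dd if=/dev/zero of="$dev" bs=1M count=1
fi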
00:03:14.086 03:14:15 -- spdk/autotest.sh@105 -- # sync 00:03:14.086 03:14:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.086 03:14:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.086 03:14:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:19.437 03:14:20 -- spdk/autotest.sh@111 -- # uname -s 00:03:19.437 03:14:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:19.437 03:14:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:19.437 03:14:20 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.966 Hugepages 00:03:21.966 node hugesize free / total 00:03:21.966 node0 1048576kB 0 / 0 00:03:21.967 node0 2048kB 0 / 0 00:03:21.967 node1 1048576kB 0 / 0 00:03:21.967 node1 2048kB 0 / 0 00:03:21.967 00:03:21.967 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.967 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:21.967 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:22.225 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:22.225 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:22.225 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:22.225 03:14:23 -- spdk/autotest.sh@117 -- # uname -s 00:03:22.225 03:14:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:22.225 03:14:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:22.225 03:14:23 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.510 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:25.510 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:26.076 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.076 03:14:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:27.009 03:14:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:27.009 03:14:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:27.009 03:14:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:27.009 03:14:28 -- 
common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:27.009 03:14:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:27.009 03:14:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:27.009 03:14:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.009 03:14:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.009 03:14:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:27.009 03:14:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:27.009 03:14:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:27.009 03:14:28 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.536 Waiting for block devices as requested 00:03:29.536 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:29.536 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:29.794 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:29.794 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:29.794 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:29.794 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:30.053 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:30.053 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:30.053 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:30.311 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:30.311 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:30.311 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:30.311 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:30.570 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:30.570 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:30.570 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:30.829 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:30.829 03:14:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:30.829 03:14:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:30.829 03:14:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:30.829 03:14:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:30.829 03:14:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:30.829 03:14:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:30.829 03:14:31 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:30.829 03:14:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:30.829 03:14:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:30.829 03:14:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:30.829 03:14:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:30.829 03:14:31 -- 
common/autotest_common.sh@1540 -- # grep unvmcap 00:03:30.829 03:14:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:30.829 03:14:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:30.829 03:14:31 -- common/autotest_common.sh@1543 -- # continue 00:03:30.829 03:14:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:30.829 03:14:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:30.829 03:14:31 -- common/autotest_common.sh@10 -- # set +x 00:03:30.829 03:14:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:30.829 03:14:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.829 03:14:31 -- common/autotest_common.sh@10 -- # set +x 00:03:30.829 03:14:31 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.112 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.112 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.679 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:34.679 03:14:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:34.679 03:14:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.679 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:03:34.679 03:14:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.679 03:14:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:34.679 03:14:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.679 03:14:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:34.679 03:14:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:34.679 03:14:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:34.679 03:14:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.679 03:14:35 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:34.679 03:14:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:34.679 03:14:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:34.679 03:14:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.679 03:14:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.679 03:14:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:34.937 03:14:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:34.937 03:14:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:34.937 03:14:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:34.937 03:14:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:34.937 03:14:35 -- 
common/autotest_common.sh@1566 -- # device=0x0a54 00:03:34.937 03:14:35 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:34.937 03:14:35 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:34.937 03:14:35 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:34.937 03:14:35 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:34.937 03:14:35 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:34.937 03:14:35 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2451348 00:03:34.937 03:14:35 -- common/autotest_common.sh@1585 -- # waitforlisten 2451348 00:03:34.937 03:14:35 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:34.937 03:14:35 -- common/autotest_common.sh@835 -- # '[' -z 2451348 ']' 00:03:34.937 03:14:35 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.937 03:14:35 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:34.937 03:14:35 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:34.937 03:14:35 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:34.937 03:14:35 -- common/autotest_common.sh@10 -- # set +x 00:03:34.938 [2024-12-13 03:14:36.021996] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:03:34.938 [2024-12-13 03:14:36.022084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451348 ] 00:03:34.938 [2024-12-13 03:14:36.134362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.196 [2024-12-13 03:14:36.243249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.131 03:14:37 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:36.131 03:14:37 -- common/autotest_common.sh@868 -- # return 0 00:03:36.131 03:14:37 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:36.131 03:14:37 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:36.131 03:14:37 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:39.414 nvme0n1 00:03:39.414 03:14:40 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:39.414 [2024-12-13 03:14:40.261151] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:39.414 [2024-12-13 03:14:40.261199] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:39.414 request: 00:03:39.414 { 00:03:39.414 "nvme_ctrlr_name": "nvme0", 00:03:39.414 "password": "test", 00:03:39.414 "method": "bdev_nvme_opal_revert", 00:03:39.414 "req_id": 1 00:03:39.414 } 00:03:39.414 Got JSON-RPC error response 00:03:39.414 response: 00:03:39.414 { 00:03:39.414 "code": -32603, 00:03:39.414 "message": "Internal error" 00:03:39.414 } 00:03:39.414 03:14:40 -- common/autotest_common.sh@1591 -- # true 00:03:39.414 03:14:40 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:39.414 03:14:40 -- common/autotest_common.sh@1595 -- # killprocess 2451348 
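# A hand-run equivalent of the opal_revert_cleanup RPCs traced above (hypothetical reproduction,
# not part of the captured log). The -32603 "Internal error" response is expected on this
# 8086:0a54 drive because the TPer revert is rejected (error 18), so the failure is tolerated
# the same way autotest tolerates it before killing the spdk_tgt process.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
$RPC bdev_nvme_opal_revert -b nvme0 -p test || true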
00:03:39.414 03:14:40 -- common/autotest_common.sh@954 -- # '[' -z 2451348 ']' 00:03:39.414 03:14:40 -- common/autotest_common.sh@958 -- # kill -0 2451348 00:03:39.414 03:14:40 -- common/autotest_common.sh@959 -- # uname 00:03:39.414 03:14:40 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.414 03:14:40 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2451348 00:03:39.414 03:14:40 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.414 03:14:40 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.414 03:14:40 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2451348' 00:03:39.414 killing process with pid 2451348 00:03:39.414 03:14:40 -- common/autotest_common.sh@973 -- # kill 2451348 00:03:39.414 03:14:40 -- common/autotest_common.sh@978 -- # wait 2451348 00:03:42.696 03:14:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:42.696 03:14:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:42.696 03:14:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.696 03:14:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.696 03:14:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:42.696 03:14:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.696 03:14:43 -- common/autotest_common.sh@10 -- # set +x 00:03:42.696 03:14:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:42.696 03:14:43 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.696 03:14:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.696 03:14:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.696 03:14:43 -- common/autotest_common.sh@10 -- # set +x 00:03:42.954 ************************************ 00:03:42.954 START TEST env 00:03:42.954 ************************************ 00:03:42.954 03:14:43 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.954 * Looking for test storage... 00:03:42.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.954 03:14:44 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:42.954 03:14:44 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:42.954 03:14:44 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:42.954 03:14:44 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:42.954 03:14:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.954 03:14:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.954 03:14:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.954 03:14:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.954 03:14:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.954 03:14:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.954 03:14:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.954 03:14:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.954 03:14:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.954 03:14:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.954 03:14:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.954 03:14:44 env -- scripts/common.sh@344 -- # case "$op" in 00:03:42.954 03:14:44 env -- scripts/common.sh@345 -- # : 1 00:03:42.954 03:14:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.954 03:14:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.954 03:14:44 env -- scripts/common.sh@365 -- # decimal 1 00:03:42.954 03:14:44 env -- scripts/common.sh@353 -- # local d=1 00:03:42.954 03:14:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.954 03:14:44 env -- scripts/common.sh@355 -- # echo 1 00:03:42.954 03:14:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.954 03:14:44 env -- scripts/common.sh@366 -- # decimal 2 00:03:42.954 03:14:44 env -- scripts/common.sh@353 -- # local d=2 00:03:42.954 03:14:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.954 03:14:44 env -- scripts/common.sh@355 -- # echo 2 00:03:42.954 03:14:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.954 03:14:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.954 03:14:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.954 03:14:44 env -- scripts/common.sh@368 -- # return 0 00:03:42.954 03:14:44 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.954 03:14:44 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:42.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.955 --rc genhtml_branch_coverage=1 00:03:42.955 --rc genhtml_function_coverage=1 00:03:42.955 --rc genhtml_legend=1 00:03:42.955 --rc geninfo_all_blocks=1 00:03:42.955 --rc geninfo_unexecuted_blocks=1 00:03:42.955 00:03:42.955 ' 00:03:42.955 03:14:44 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.955 --rc genhtml_branch_coverage=1 00:03:42.955 --rc genhtml_function_coverage=1 00:03:42.955 --rc genhtml_legend=1 00:03:42.955 --rc geninfo_all_blocks=1 00:03:42.955 --rc geninfo_unexecuted_blocks=1 00:03:42.955 00:03:42.955 ' 00:03:42.955 03:14:44 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.955 --rc genhtml_branch_coverage=1 00:03:42.955 --rc genhtml_function_coverage=1 00:03:42.955 --rc genhtml_legend=1 00:03:42.955 --rc geninfo_all_blocks=1 00:03:42.955 --rc geninfo_unexecuted_blocks=1 00:03:42.955 00:03:42.955 ' 00:03:42.955 03:14:44 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:42.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.955 --rc genhtml_branch_coverage=1 00:03:42.955 --rc genhtml_function_coverage=1 00:03:42.955 --rc genhtml_legend=1 00:03:42.955 --rc geninfo_all_blocks=1 00:03:42.955 --rc geninfo_unexecuted_blocks=1 00:03:42.955 00:03:42.955 ' 00:03:42.955 03:14:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.955 03:14:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.955 03:14:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.955 03:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.955 ************************************ 00:03:42.955 START TEST env_memory 00:03:42.955 ************************************ 00:03:42.955 03:14:44 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.955 00:03:42.955 00:03:42.955 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.955 http://cunit.sourceforge.net/ 00:03:42.955 00:03:42.955 00:03:42.955 Suite: memory 00:03:43.212 Test: alloc and free memory map ...[2024-12-13 03:14:44.189937] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:43.212 passed 00:03:43.212 Test: mem map translation ...[2024-12-13 03:14:44.229876] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:43.212 [2024-12-13 03:14:44.229900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:43.212 [2024-12-13 03:14:44.229949] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:43.212 [2024-12-13 03:14:44.229980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:43.212 passed 00:03:43.212 Test: mem map registration ...[2024-12-13 03:14:44.291529] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:43.212 [2024-12-13 03:14:44.291551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:43.212 passed 00:03:43.212 Test: mem map adjacent registrations ...passed 00:03:43.212 00:03:43.212 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.212 suites 1 1 n/a 0 0 00:03:43.212 tests 4 4 4 0 0 00:03:43.212 asserts 152 152 152 0 n/a 00:03:43.212 00:03:43.212 Elapsed time = 0.226 seconds 00:03:43.212 00:03:43.212 real 0m0.261s 00:03:43.212 user 0m0.243s 00:03:43.212 sys 0m0.017s 00:03:43.212 03:14:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.212 03:14:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:43.212 ************************************ 00:03:43.212 END TEST env_memory 00:03:43.212 ************************************ 00:03:43.470 03:14:44 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:43.470 03:14:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.470 03:14:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.470 03:14:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.470 ************************************ 00:03:43.470 START TEST env_vtophys 00:03:43.470 ************************************ 00:03:43.470 03:14:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:43.470 EAL: lib.eal log level changed from notice to debug 00:03:43.470 EAL: Detected lcore 0 as core 0 on socket 0 00:03:43.470 EAL: Detected lcore 1 as core 1 on socket 0 00:03:43.470 EAL: Detected lcore 2 as core 2 on socket 0 00:03:43.471 EAL: Detected lcore 3 as core 3 on socket 0 00:03:43.471 EAL: Detected lcore 4 as core 4 on socket 0 00:03:43.471 EAL: Detected lcore 5 as core 5 on socket 0 00:03:43.471 EAL: Detected lcore 6 as core 6 on socket 0 00:03:43.471 EAL: Detected lcore 7 as core 8 on socket 0 00:03:43.471 EAL: Detected lcore 8 as core 9 on socket 0 00:03:43.471 EAL: Detected lcore 9 as core 10 on socket 0 00:03:43.471 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:43.471 EAL: Detected lcore 11 as core 12 on socket 0 00:03:43.471 EAL: Detected lcore 12 as core 13 on socket 0 00:03:43.471 EAL: Detected lcore 13 as core 16 on socket 0 00:03:43.471 EAL: Detected lcore 14 as core 17 on socket 0 00:03:43.471 EAL: Detected lcore 15 as core 18 on socket 0 00:03:43.471 EAL: Detected lcore 16 as core 19 on socket 0 00:03:43.471 EAL: Detected lcore 17 as core 20 on socket 0 00:03:43.471 EAL: Detected lcore 18 as core 21 on socket 0 00:03:43.471 EAL: Detected lcore 19 as core 25 on socket 0 00:03:43.471 EAL: Detected lcore 20 as core 26 on socket 0 00:03:43.471 EAL: Detected lcore 21 as core 27 on socket 0 00:03:43.471 EAL: Detected lcore 22 as core 28 on socket 0 00:03:43.471 EAL: Detected lcore 23 as core 29 on socket 0 00:03:43.471 EAL: Detected lcore 24 as core 0 on socket 1 00:03:43.471 EAL: Detected lcore 25 as core 1 on socket 1 00:03:43.471 EAL: Detected lcore 26 as core 2 on socket 1 00:03:43.471 EAL: Detected lcore 27 as core 3 on socket 1 00:03:43.471 EAL: Detected lcore 28 as core 4 on socket 1 00:03:43.471 EAL: Detected lcore 29 as core 5 on socket 1 00:03:43.471 EAL: Detected lcore 30 as core 6 on socket 1 00:03:43.471 EAL: Detected lcore 31 as core 8 on socket 1 00:03:43.471 EAL: Detected lcore 32 as core 9 on socket 1 00:03:43.471 EAL: Detected lcore 33 as core 10 on socket 1 00:03:43.471 EAL: Detected lcore 34 as core 11 on socket 1 00:03:43.471 EAL: Detected lcore 35 as core 12 on socket 1 00:03:43.471 EAL: Detected lcore 36 as core 13 on socket 1 00:03:43.471 EAL: Detected lcore 37 as core 16 on socket 1 00:03:43.471 EAL: Detected lcore 38 as core 17 on socket 1 00:03:43.471 EAL: Detected lcore 39 as core 18 on socket 1 00:03:43.471 EAL: Detected lcore 40 as core 19 on socket 1 00:03:43.471 EAL: Detected lcore 41 as core 20 on socket 1 00:03:43.471 EAL: Detected lcore 42 as core 21 on socket 1 00:03:43.471 EAL: Detected lcore 43 as core 25 on socket 1 00:03:43.471 EAL: Detected lcore 44 as core 26 on socket 1 00:03:43.471 EAL: Detected lcore 45 as core 27 on socket 1 00:03:43.471 EAL: Detected lcore 46 as core 28 on socket 1 00:03:43.471 EAL: Detected lcore 47 as core 29 on socket 1 00:03:43.471 EAL: Detected lcore 48 as core 0 on socket 0 00:03:43.471 EAL: Detected lcore 49 as core 1 on socket 0 00:03:43.471 EAL: Detected lcore 50 as core 2 on socket 0 00:03:43.471 EAL: Detected lcore 51 as core 3 on socket 0 00:03:43.471 EAL: Detected lcore 52 as core 4 on socket 0 00:03:43.471 EAL: Detected lcore 53 as core 5 on socket 0 00:03:43.471 EAL: Detected lcore 54 as core 6 on socket 0 00:03:43.471 EAL: Detected lcore 55 as core 8 on socket 0 00:03:43.471 EAL: Detected lcore 56 as core 9 on socket 0 00:03:43.471 EAL: Detected lcore 57 as core 10 on socket 0 00:03:43.471 EAL: Detected lcore 58 as core 11 on socket 0 00:03:43.471 EAL: Detected lcore 59 as core 12 on socket 0 00:03:43.471 EAL: Detected lcore 60 as core 13 on socket 0 00:03:43.471 EAL: Detected lcore 61 as core 16 on socket 0 00:03:43.471 EAL: Detected lcore 62 as core 17 on socket 0 00:03:43.471 EAL: Detected lcore 63 as core 18 on socket 0 00:03:43.471 EAL: Detected lcore 64 as core 19 on socket 0 00:03:43.471 EAL: Detected lcore 65 as core 20 on socket 0 00:03:43.471 EAL: Detected lcore 66 as core 21 on socket 0 00:03:43.471 EAL: Detected lcore 67 as core 25 on socket 0 00:03:43.471 EAL: Detected lcore 68 as core 26 on socket 0 00:03:43.471 EAL: Detected lcore 69 as core 27 on socket 0 00:03:43.471 EAL: Detected lcore 70 as core 28 on socket 0 00:03:43.471 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:43.471 EAL: Detected lcore 72 as core 0 on socket 1 00:03:43.471 EAL: Detected lcore 73 as core 1 on socket 1 00:03:43.471 EAL: Detected lcore 74 as core 2 on socket 1 00:03:43.471 EAL: Detected lcore 75 as core 3 on socket 1 00:03:43.471 EAL: Detected lcore 76 as core 4 on socket 1 00:03:43.471 EAL: Detected lcore 77 as core 5 on socket 1 00:03:43.471 EAL: Detected lcore 78 as core 6 on socket 1 00:03:43.471 EAL: Detected lcore 79 as core 8 on socket 1 00:03:43.471 EAL: Detected lcore 80 as core 9 on socket 1 00:03:43.471 EAL: Detected lcore 81 as core 10 on socket 1 00:03:43.471 EAL: Detected lcore 82 as core 11 on socket 1 00:03:43.471 EAL: Detected lcore 83 as core 12 on socket 1 00:03:43.471 EAL: Detected lcore 84 as core 13 on socket 1 00:03:43.471 EAL: Detected lcore 85 as core 16 on socket 1 00:03:43.471 EAL: Detected lcore 86 as core 17 on socket 1 00:03:43.471 EAL: Detected lcore 87 as core 18 on socket 1 00:03:43.471 EAL: Detected lcore 88 as core 19 on socket 1 00:03:43.471 EAL: Detected lcore 89 as core 20 on socket 1 00:03:43.471 EAL: Detected lcore 90 as core 21 on socket 1 00:03:43.471 EAL: Detected lcore 91 as core 25 on socket 1 00:03:43.471 EAL: Detected lcore 92 as core 26 on socket 1 00:03:43.471 EAL: Detected lcore 93 as core 27 on socket 1 00:03:43.471 EAL: Detected lcore 94 as core 28 on socket 1 00:03:43.471 EAL: Detected lcore 95 as core 29 on socket 1 00:03:43.471 EAL: Maximum logical cores by configuration: 128 00:03:43.471 EAL: Detected CPU lcores: 96 00:03:43.471 EAL: Detected NUMA nodes: 2 00:03:43.471 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:43.471 EAL: Detected shared linkage of DPDK 00:03:43.471 EAL: No shared files mode enabled, IPC will be disabled 00:03:43.471 EAL: Bus pci wants IOVA as 'DC' 00:03:43.471 EAL: Buses did not request a specific IOVA mode. 00:03:43.471 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:43.471 EAL: Selected IOVA mode 'VA' 00:03:43.471 EAL: Probing VFIO support... 00:03:43.471 EAL: IOMMU type 1 (Type 1) is supported 00:03:43.471 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:43.471 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:43.471 EAL: VFIO support initialized 00:03:43.471 EAL: Ask a virtual area of 0x2e000 bytes 00:03:43.471 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:43.471 EAL: Setting up physically contiguous memory... 
00:03:43.471 EAL: Setting maximum number of open files to 524288 00:03:43.471 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:43.471 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:43.471 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:43.471 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:43.471 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.471 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:43.471 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.471 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.471 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:43.471 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:43.471 EAL: Hugepages will be freed exactly as allocated. 00:03:43.471 EAL: No shared files mode enabled, IPC is disabled 00:03:43.471 EAL: No shared files mode enabled, IPC is disabled 00:03:43.471 EAL: TSC frequency is ~2100000 KHz 00:03:43.471 EAL: Main lcore 0 is ready (tid=7f2da87bfa40;cpuset=[0]) 00:03:43.471 EAL: Trying to obtain current memory policy. 00:03:43.471 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.471 EAL: Restoring previous memory policy: 0 00:03:43.471 EAL: request: mp_malloc_sync 00:03:43.471 EAL: No shared files mode enabled, IPC is disabled 00:03:43.471 EAL: Heap on socket 0 was expanded by 2MB 00:03:43.471 EAL: No shared files mode enabled, IPC is disabled 00:03:43.471 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:43.471 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.471 00:03:43.471 00:03:43.471 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.471 http://cunit.sourceforge.net/ 00:03:43.471 00:03:43.471 00:03:43.471 Suite: components_suite 00:03:43.729 Test: vtophys_malloc_test ...passed 00:03:43.729 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:43.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.729 EAL: Restoring previous memory policy: 4 00:03:43.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.729 EAL: request: mp_malloc_sync 00:03:43.729 EAL: No shared files mode enabled, IPC is disabled 00:03:43.729 EAL: Heap on socket 0 was expanded by 4MB 00:03:43.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.729 EAL: request: mp_malloc_sync 00:03:43.729 EAL: No shared files mode enabled, IPC is disabled 00:03:43.729 EAL: Heap on socket 0 was shrunk by 4MB 00:03:43.729 EAL: Trying to obtain current memory policy. 00:03:43.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.729 EAL: Restoring previous memory policy: 4 00:03:43.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.729 EAL: request: mp_malloc_sync 00:03:43.729 EAL: No shared files mode enabled, IPC is disabled 00:03:43.729 EAL: Heap on socket 0 was expanded by 6MB 00:03:43.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.729 EAL: request: mp_malloc_sync 00:03:43.729 EAL: No shared files mode enabled, IPC is disabled 00:03:43.729 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.729 EAL: Trying to obtain current memory policy. 00:03:43.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.729 EAL: Restoring previous memory policy: 4 00:03:43.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.729 EAL: request: mp_malloc_sync 00:03:43.729 EAL: No shared files mode enabled, IPC is disabled 00:03:43.729 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.729 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.729 EAL: request: mp_malloc_sync 00:03:43.729 EAL: No shared files mode enabled, IPC is disabled 00:03:43.729 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.990 EAL: Trying to obtain current memory policy. 
00:03:43.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.990 EAL: Restoring previous memory policy: 4 00:03:43.990 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.990 EAL: request: mp_malloc_sync 00:03:43.990 EAL: No shared files mode enabled, IPC is disabled 00:03:43.990 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.990 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.990 EAL: request: mp_malloc_sync 00:03:43.990 EAL: No shared files mode enabled, IPC is disabled 00:03:43.990 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.990 EAL: Trying to obtain current memory policy. 00:03:43.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.990 EAL: Restoring previous memory policy: 4 00:03:43.990 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.990 EAL: request: mp_malloc_sync 00:03:43.990 EAL: No shared files mode enabled, IPC is disabled 00:03:43.990 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.990 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.990 EAL: request: mp_malloc_sync 00:03:43.990 EAL: No shared files mode enabled, IPC is disabled 00:03:43.990 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.990 EAL: Trying to obtain current memory policy. 00:03:43.990 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.990 EAL: Restoring previous memory policy: 4 00:03:43.990 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.990 EAL: request: mp_malloc_sync 00:03:43.990 EAL: No shared files mode enabled, IPC is disabled 00:03:43.990 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.248 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.248 EAL: request: mp_malloc_sync 00:03:44.248 EAL: No shared files mode enabled, IPC is disabled 00:03:44.248 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.248 EAL: Trying to obtain current memory policy. 00:03:44.248 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.248 EAL: Restoring previous memory policy: 4 00:03:44.248 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.248 EAL: request: mp_malloc_sync 00:03:44.248 EAL: No shared files mode enabled, IPC is disabled 00:03:44.248 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.505 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.505 EAL: request: mp_malloc_sync 00:03:44.505 EAL: No shared files mode enabled, IPC is disabled 00:03:44.505 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.768 EAL: Trying to obtain current memory policy. 00:03:44.768 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.768 EAL: Restoring previous memory policy: 4 00:03:44.768 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.768 EAL: request: mp_malloc_sync 00:03:44.768 EAL: No shared files mode enabled, IPC is disabled 00:03:44.768 EAL: Heap on socket 0 was expanded by 258MB 00:03:45.335 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.335 EAL: request: mp_malloc_sync 00:03:45.335 EAL: No shared files mode enabled, IPC is disabled 00:03:45.335 EAL: Heap on socket 0 was shrunk by 258MB 00:03:45.593 EAL: Trying to obtain current memory policy. 
00:03:45.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.851 EAL: Restoring previous memory policy: 4 00:03:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.851 EAL: request: mp_malloc_sync 00:03:45.851 EAL: No shared files mode enabled, IPC is disabled 00:03:45.851 EAL: Heap on socket 0 was expanded by 514MB 00:03:46.785 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.785 EAL: request: mp_malloc_sync 00:03:46.785 EAL: No shared files mode enabled, IPC is disabled 00:03:46.785 EAL: Heap on socket 0 was shrunk by 514MB 00:03:47.352 EAL: Trying to obtain current memory policy. 00:03:47.352 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:47.610 EAL: Restoring previous memory policy: 4 00:03:47.610 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.610 EAL: request: mp_malloc_sync 00:03:47.610 EAL: No shared files mode enabled, IPC is disabled 00:03:47.610 EAL: Heap on socket 0 was expanded by 1026MB 00:03:49.511 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.511 EAL: request: mp_malloc_sync 00:03:49.511 EAL: No shared files mode enabled, IPC is disabled 00:03:49.511 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:51.413 passed 00:03:51.413 00:03:51.413 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.413 suites 1 1 n/a 0 0 00:03:51.413 tests 2 2 2 0 0 00:03:51.413 asserts 497 497 497 0 n/a 00:03:51.413 00:03:51.413 Elapsed time = 7.646 seconds 00:03:51.413 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.413 EAL: request: mp_malloc_sync 00:03:51.413 EAL: No shared files mode enabled, IPC is disabled 00:03:51.413 EAL: Heap on socket 0 was shrunk by 2MB 00:03:51.413 EAL: No shared files mode enabled, IPC is disabled 00:03:51.413 EAL: No shared files mode enabled, IPC is disabled 00:03:51.413 EAL: No shared files mode enabled, IPC is disabled 00:03:51.413 00:03:51.413 real 0m7.874s 00:03:51.413 user 0m7.073s 00:03:51.413 sys 0m0.752s 00:03:51.413 03:14:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.413 03:14:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:51.413 ************************************ 00:03:51.413 END TEST env_vtophys 00:03:51.413 ************************************ 00:03:51.413 03:14:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:51.413 03:14:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.413 03:14:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.413 03:14:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.413 ************************************ 00:03:51.413 START TEST env_pci 00:03:51.413 ************************************ 00:03:51.414 03:14:52 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:51.414 00:03:51.414 00:03:51.414 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.414 http://cunit.sourceforge.net/ 00:03:51.414 00:03:51.414 00:03:51.414 Suite: pci 00:03:51.414 Test: pci_hook ...[2024-12-13 03:14:52.418347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2454171 has claimed it 00:03:51.414 EAL: Cannot find device (10000:00:01.0) 00:03:51.414 EAL: Failed to attach device on primary process 00:03:51.414 passed 00:03:51.414 00:03:51.414 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:51.414 suites 1 1 n/a 0 0 00:03:51.414 tests 1 1 1 0 0 00:03:51.414 asserts 25 25 25 0 n/a 00:03:51.414 00:03:51.414 Elapsed time = 0.045 seconds 00:03:51.414 00:03:51.414 real 0m0.119s 00:03:51.414 user 0m0.049s 00:03:51.414 sys 0m0.070s 00:03:51.414 03:14:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.414 03:14:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:51.414 ************************************ 00:03:51.414 END TEST env_pci 00:03:51.414 ************************************ 00:03:51.414 03:14:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:51.414 03:14:52 env -- env/env.sh@15 -- # uname 00:03:51.414 03:14:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:51.414 03:14:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:51.414 03:14:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.414 03:14:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:51.414 03:14:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.414 03:14:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.414 ************************************ 00:03:51.414 START TEST env_dpdk_post_init 00:03:51.414 ************************************ 00:03:51.414 03:14:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.414 EAL: Detected CPU lcores: 96 00:03:51.414 EAL: Detected NUMA nodes: 2 00:03:51.414 EAL: Detected shared linkage of DPDK 00:03:51.673 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.673 EAL: Selected IOVA mode 'VA' 00:03:51.673 EAL: VFIO support initialized 00:03:51.673 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.673 EAL: Using IOMMU type 1 (Type 1) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:51.673 EAL: Ignore mapping IO port bar(1) 00:03:51.673 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:52.609 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:52.609 EAL: Ignore mapping IO port bar(1) 00:03:52.609 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:55.893 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:55.893 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:55.893 Starting DPDK initialization... 00:03:55.893 Starting SPDK post initialization... 00:03:55.893 SPDK NVMe probe 00:03:55.893 Attaching to 0000:5e:00.0 00:03:55.893 Attached to 0000:5e:00.0 00:03:55.893 Cleaning up... 00:03:55.893 00:03:55.893 real 0m4.489s 00:03:55.893 user 0m3.076s 00:03:55.893 sys 0m0.482s 00:03:55.893 03:14:57 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.893 03:14:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.893 ************************************ 00:03:55.893 END TEST env_dpdk_post_init 00:03:55.893 ************************************ 00:03:55.893 03:14:57 env -- env/env.sh@26 -- # uname 00:03:55.893 03:14:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.893 03:14:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.893 03:14:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.893 03:14:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.893 03:14:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.151 ************************************ 00:03:56.151 START TEST env_mem_callbacks 00:03:56.151 ************************************ 00:03:56.151 03:14:57 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.151 EAL: Detected CPU lcores: 96 00:03:56.151 EAL: Detected NUMA nodes: 2 00:03:56.151 EAL: Detected shared linkage of DPDK 00:03:56.151 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.151 EAL: Selected IOVA mode 'VA' 00:03:56.151 EAL: VFIO support initialized 00:03:56.151 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.151 00:03:56.151 00:03:56.151 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.151 http://cunit.sourceforge.net/ 00:03:56.151 00:03:56.151 00:03:56.151 Suite: memory 00:03:56.151 Test: test ... 
00:03:56.151 register 0x200000200000 2097152 00:03:56.151 malloc 3145728 00:03:56.151 register 0x200000400000 4194304 00:03:56.151 buf 0x2000004fffc0 len 3145728 PASSED 00:03:56.151 malloc 64 00:03:56.151 buf 0x2000004ffec0 len 64 PASSED 00:03:56.151 malloc 4194304 00:03:56.151 register 0x200000800000 6291456 00:03:56.151 buf 0x2000009fffc0 len 4194304 PASSED 00:03:56.151 free 0x2000004fffc0 3145728 00:03:56.151 free 0x2000004ffec0 64 00:03:56.151 unregister 0x200000400000 4194304 PASSED 00:03:56.151 free 0x2000009fffc0 4194304 00:03:56.151 unregister 0x200000800000 6291456 PASSED 00:03:56.152 malloc 8388608 00:03:56.152 register 0x200000400000 10485760 00:03:56.152 buf 0x2000005fffc0 len 8388608 PASSED 00:03:56.152 free 0x2000005fffc0 8388608 00:03:56.152 unregister 0x200000400000 10485760 PASSED 00:03:56.152 passed 00:03:56.152 00:03:56.152 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.152 suites 1 1 n/a 0 0 00:03:56.152 tests 1 1 1 0 0 00:03:56.152 asserts 15 15 15 0 n/a 00:03:56.152 00:03:56.152 Elapsed time = 0.068 seconds 00:03:56.152 00:03:56.152 real 0m0.170s 00:03:56.152 user 0m0.097s 00:03:56.152 sys 0m0.073s 00:03:56.152 03:14:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.152 03:14:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:56.152 ************************************ 00:03:56.152 END TEST env_mem_callbacks 00:03:56.152 ************************************ 00:03:56.152 00:03:56.152 real 0m13.402s 00:03:56.152 user 0m10.748s 00:03:56.152 sys 0m1.702s 00:03:56.152 03:14:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.152 03:14:57 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.152 ************************************ 00:03:56.152 END TEST env 00:03:56.152 ************************************ 00:03:56.152 03:14:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:56.152 03:14:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.152 03:14:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.152 03:14:57 -- common/autotest_common.sh@10 -- # set +x 00:03:56.411 ************************************ 00:03:56.411 START TEST rpc 00:03:56.411 ************************************ 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:56.411 * Looking for test storage... 
00:03:56.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.411 03:14:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.411 03:14:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.411 03:14:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.411 03:14:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.411 03:14:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.411 03:14:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:56.411 03:14:57 rpc -- scripts/common.sh@345 -- # : 1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.411 03:14:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.411 03:14:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@353 -- # local d=1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.411 03:14:57 rpc -- scripts/common.sh@355 -- # echo 1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.411 03:14:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@353 -- # local d=2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.411 03:14:57 rpc -- scripts/common.sh@355 -- # echo 2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.411 03:14:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.411 03:14:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.411 03:14:57 rpc -- scripts/common.sh@368 -- # return 0 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.411 --rc genhtml_branch_coverage=1 00:03:56.411 --rc genhtml_function_coverage=1 00:03:56.411 --rc genhtml_legend=1 00:03:56.411 --rc geninfo_all_blocks=1 00:03:56.411 --rc geninfo_unexecuted_blocks=1 00:03:56.411 00:03:56.411 ' 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.411 --rc genhtml_branch_coverage=1 00:03:56.411 --rc genhtml_function_coverage=1 00:03:56.411 --rc genhtml_legend=1 00:03:56.411 --rc geninfo_all_blocks=1 00:03:56.411 --rc geninfo_unexecuted_blocks=1 00:03:56.411 00:03:56.411 ' 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.411 --rc genhtml_branch_coverage=1 00:03:56.411 --rc genhtml_function_coverage=1 
00:03:56.411 --rc genhtml_legend=1 00:03:56.411 --rc geninfo_all_blocks=1 00:03:56.411 --rc geninfo_unexecuted_blocks=1 00:03:56.411 00:03:56.411 ' 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:56.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.411 --rc genhtml_branch_coverage=1 00:03:56.411 --rc genhtml_function_coverage=1 00:03:56.411 --rc genhtml_legend=1 00:03:56.411 --rc geninfo_all_blocks=1 00:03:56.411 --rc geninfo_unexecuted_blocks=1 00:03:56.411 00:03:56.411 ' 00:03:56.411 03:14:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2455198 00:03:56.411 03:14:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.411 03:14:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2455198 00:03:56.411 03:14:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 2455198 ']' 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.411 03:14:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.411 [2024-12-13 03:14:57.613711] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:03:56.411 [2024-12-13 03:14:57.613818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455198 ] 00:03:56.670 [2024-12-13 03:14:57.719120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.670 [2024-12-13 03:14:57.823604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.670 [2024-12-13 03:14:57.823648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2455198' to capture a snapshot of events at runtime. 00:03:56.670 [2024-12-13 03:14:57.823659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:56.670 [2024-12-13 03:14:57.823684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:56.670 [2024-12-13 03:14:57.823693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2455198 for offline analysis/debug. 
00:03:56.670 [2024-12-13 03:14:57.825001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.710 03:14:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.710 03:14:58 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:57.710 03:14:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.710 03:14:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.710 03:14:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:57.710 03:14:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:57.710 03:14:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.710 03:14:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.710 03:14:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.710 ************************************ 00:03:57.710 START TEST rpc_integrity 00:03:57.710 ************************************ 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.710 { 00:03:57.710 "name": "Malloc0", 00:03:57.710 "aliases": [ 00:03:57.710 "436bca7c-5acb-48b4-8e6e-32c49a387485" 00:03:57.710 ], 00:03:57.710 "product_name": "Malloc disk", 00:03:57.710 "block_size": 512, 00:03:57.710 "num_blocks": 16384, 00:03:57.710 "uuid": "436bca7c-5acb-48b4-8e6e-32c49a387485", 00:03:57.710 "assigned_rate_limits": { 00:03:57.710 "rw_ios_per_sec": 0, 00:03:57.710 "rw_mbytes_per_sec": 0, 00:03:57.710 "r_mbytes_per_sec": 0, 00:03:57.710 "w_mbytes_per_sec": 0 00:03:57.710 }, 
00:03:57.710 "claimed": false, 00:03:57.710 "zoned": false, 00:03:57.710 "supported_io_types": { 00:03:57.710 "read": true, 00:03:57.710 "write": true, 00:03:57.710 "unmap": true, 00:03:57.710 "flush": true, 00:03:57.710 "reset": true, 00:03:57.710 "nvme_admin": false, 00:03:57.710 "nvme_io": false, 00:03:57.710 "nvme_io_md": false, 00:03:57.710 "write_zeroes": true, 00:03:57.710 "zcopy": true, 00:03:57.710 "get_zone_info": false, 00:03:57.710 "zone_management": false, 00:03:57.710 "zone_append": false, 00:03:57.710 "compare": false, 00:03:57.710 "compare_and_write": false, 00:03:57.710 "abort": true, 00:03:57.710 "seek_hole": false, 00:03:57.710 "seek_data": false, 00:03:57.710 "copy": true, 00:03:57.710 "nvme_iov_md": false 00:03:57.710 }, 00:03:57.710 "memory_domains": [ 00:03:57.710 { 00:03:57.710 "dma_device_id": "system", 00:03:57.710 "dma_device_type": 1 00:03:57.710 }, 00:03:57.710 { 00:03:57.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.710 "dma_device_type": 2 00:03:57.710 } 00:03:57.710 ], 00:03:57.710 "driver_specific": {} 00:03:57.710 } 00:03:57.710 ]' 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.710 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.710 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.710 [2024-12-13 03:14:58.787399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.710 [2024-12-13 03:14:58.787445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.710 [2024-12-13 03:14:58.787468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021c80 00:03:57.711 [2024-12-13 03:14:58.787478] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.711 [2024-12-13 03:14:58.789441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.711 [2024-12-13 03:14:58.789466] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.711 Passthru0 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.711 { 00:03:57.711 "name": "Malloc0", 00:03:57.711 "aliases": [ 00:03:57.711 "436bca7c-5acb-48b4-8e6e-32c49a387485" 00:03:57.711 ], 00:03:57.711 "product_name": "Malloc disk", 00:03:57.711 "block_size": 512, 00:03:57.711 "num_blocks": 16384, 00:03:57.711 "uuid": "436bca7c-5acb-48b4-8e6e-32c49a387485", 00:03:57.711 "assigned_rate_limits": { 00:03:57.711 "rw_ios_per_sec": 0, 00:03:57.711 "rw_mbytes_per_sec": 0, 00:03:57.711 "r_mbytes_per_sec": 0, 00:03:57.711 "w_mbytes_per_sec": 0 00:03:57.711 }, 00:03:57.711 "claimed": true, 00:03:57.711 "claim_type": "exclusive_write", 00:03:57.711 "zoned": false, 00:03:57.711 "supported_io_types": { 00:03:57.711 "read": true, 00:03:57.711 "write": true, 00:03:57.711 "unmap": true, 00:03:57.711 
"flush": true, 00:03:57.711 "reset": true, 00:03:57.711 "nvme_admin": false, 00:03:57.711 "nvme_io": false, 00:03:57.711 "nvme_io_md": false, 00:03:57.711 "write_zeroes": true, 00:03:57.711 "zcopy": true, 00:03:57.711 "get_zone_info": false, 00:03:57.711 "zone_management": false, 00:03:57.711 "zone_append": false, 00:03:57.711 "compare": false, 00:03:57.711 "compare_and_write": false, 00:03:57.711 "abort": true, 00:03:57.711 "seek_hole": false, 00:03:57.711 "seek_data": false, 00:03:57.711 "copy": true, 00:03:57.711 "nvme_iov_md": false 00:03:57.711 }, 00:03:57.711 "memory_domains": [ 00:03:57.711 { 00:03:57.711 "dma_device_id": "system", 00:03:57.711 "dma_device_type": 1 00:03:57.711 }, 00:03:57.711 { 00:03:57.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.711 "dma_device_type": 2 00:03:57.711 } 00:03:57.711 ], 00:03:57.711 "driver_specific": {} 00:03:57.711 }, 00:03:57.711 { 00:03:57.711 "name": "Passthru0", 00:03:57.711 "aliases": [ 00:03:57.711 "8ebb0b47-74da-51df-9b16-d5c8a4ea5c3b" 00:03:57.711 ], 00:03:57.711 "product_name": "passthru", 00:03:57.711 "block_size": 512, 00:03:57.711 "num_blocks": 16384, 00:03:57.711 "uuid": "8ebb0b47-74da-51df-9b16-d5c8a4ea5c3b", 00:03:57.711 "assigned_rate_limits": { 00:03:57.711 "rw_ios_per_sec": 0, 00:03:57.711 "rw_mbytes_per_sec": 0, 00:03:57.711 "r_mbytes_per_sec": 0, 00:03:57.711 "w_mbytes_per_sec": 0 00:03:57.711 }, 00:03:57.711 "claimed": false, 00:03:57.711 "zoned": false, 00:03:57.711 "supported_io_types": { 00:03:57.711 "read": true, 00:03:57.711 "write": true, 00:03:57.711 "unmap": true, 00:03:57.711 "flush": true, 00:03:57.711 "reset": true, 00:03:57.711 "nvme_admin": false, 00:03:57.711 "nvme_io": false, 00:03:57.711 "nvme_io_md": false, 00:03:57.711 "write_zeroes": true, 00:03:57.711 "zcopy": true, 00:03:57.711 "get_zone_info": false, 00:03:57.711 "zone_management": false, 00:03:57.711 "zone_append": false, 00:03:57.711 "compare": false, 00:03:57.711 "compare_and_write": false, 00:03:57.711 "abort": true, 00:03:57.711 "seek_hole": false, 00:03:57.711 "seek_data": false, 00:03:57.711 "copy": true, 00:03:57.711 "nvme_iov_md": false 00:03:57.711 }, 00:03:57.711 "memory_domains": [ 00:03:57.711 { 00:03:57.711 "dma_device_id": "system", 00:03:57.711 "dma_device_type": 1 00:03:57.711 }, 00:03:57.711 { 00:03:57.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.711 "dma_device_type": 2 00:03:57.711 } 00:03:57.711 ], 00:03:57.711 "driver_specific": { 00:03:57.711 "passthru": { 00:03:57.711 "name": "Passthru0", 00:03:57.711 "base_bdev_name": "Malloc0" 00:03:57.711 } 00:03:57.711 } 00:03:57.711 } 00:03:57.711 ]' 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.711 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.711 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.007 03:14:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.007 00:03:58.007 real 0m0.268s 00:03:58.007 user 0m0.151s 00:03:58.007 sys 0m0.026s 00:03:58.007 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.007 03:14:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.007 ************************************ 00:03:58.007 END TEST rpc_integrity 00:03:58.007 ************************************ 00:03:58.007 03:14:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:58.007 03:14:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.007 03:14:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.007 03:14:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.007 ************************************ 00:03:58.007 START TEST rpc_plugins 00:03:58.007 ************************************ 00:03:58.007 03:14:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:58.007 03:14:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:58.007 03:14:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.007 03:14:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.007 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:58.008 { 00:03:58.008 "name": "Malloc1", 00:03:58.008 "aliases": [ 00:03:58.008 "d737e282-c40e-4546-9b50-2ae763a5cce9" 00:03:58.008 ], 00:03:58.008 "product_name": "Malloc disk", 00:03:58.008 "block_size": 4096, 00:03:58.008 "num_blocks": 256, 00:03:58.008 "uuid": "d737e282-c40e-4546-9b50-2ae763a5cce9", 00:03:58.008 "assigned_rate_limits": { 00:03:58.008 "rw_ios_per_sec": 0, 00:03:58.008 "rw_mbytes_per_sec": 0, 00:03:58.008 "r_mbytes_per_sec": 0, 00:03:58.008 "w_mbytes_per_sec": 0 00:03:58.008 }, 00:03:58.008 "claimed": false, 00:03:58.008 "zoned": false, 00:03:58.008 "supported_io_types": { 00:03:58.008 "read": true, 00:03:58.008 "write": true, 00:03:58.008 "unmap": true, 00:03:58.008 "flush": true, 00:03:58.008 "reset": true, 00:03:58.008 "nvme_admin": false, 00:03:58.008 "nvme_io": false, 00:03:58.008 "nvme_io_md": false, 00:03:58.008 "write_zeroes": true, 00:03:58.008 "zcopy": true, 00:03:58.008 "get_zone_info": false, 00:03:58.008 "zone_management": false, 00:03:58.008 "zone_append": false, 00:03:58.008 "compare": false, 00:03:58.008 "compare_and_write": false, 00:03:58.008 "abort": true, 00:03:58.008 "seek_hole": false, 00:03:58.008 "seek_data": false, 00:03:58.008 "copy": true, 00:03:58.008 "nvme_iov_md": 
false 00:03:58.008 }, 00:03:58.008 "memory_domains": [ 00:03:58.008 { 00:03:58.008 "dma_device_id": "system", 00:03:58.008 "dma_device_type": 1 00:03:58.008 }, 00:03:58.008 { 00:03:58.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.008 "dma_device_type": 2 00:03:58.008 } 00:03:58.008 ], 00:03:58.008 "driver_specific": {} 00:03:58.008 } 00:03:58.008 ]' 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:58.008 03:14:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:58.008 00:03:58.008 real 0m0.131s 00:03:58.008 user 0m0.072s 00:03:58.008 sys 0m0.018s 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.008 03:14:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:58.008 ************************************ 00:03:58.008 END TEST rpc_plugins 00:03:58.008 ************************************ 00:03:58.008 03:14:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:58.008 03:14:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.008 03:14:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.008 03:14:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.008 ************************************ 00:03:58.008 START TEST rpc_trace_cmd_test 00:03:58.008 ************************************ 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.008 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:58.008 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2455198", 00:03:58.008 "tpoint_group_mask": "0x8", 00:03:58.008 "iscsi_conn": { 00:03:58.008 "mask": "0x2", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "scsi": { 00:03:58.008 "mask": "0x4", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "bdev": { 00:03:58.008 "mask": "0x8", 00:03:58.008 "tpoint_mask": "0xffffffffffffffff" 00:03:58.008 }, 00:03:58.008 "nvmf_rdma": { 00:03:58.008 "mask": "0x10", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "nvmf_tcp": { 00:03:58.008 "mask": "0x20", 00:03:58.008 
"tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "ftl": { 00:03:58.008 "mask": "0x40", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "blobfs": { 00:03:58.008 "mask": "0x80", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "dsa": { 00:03:58.008 "mask": "0x200", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "thread": { 00:03:58.008 "mask": "0x400", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "nvme_pcie": { 00:03:58.008 "mask": "0x800", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "iaa": { 00:03:58.008 "mask": "0x1000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "nvme_tcp": { 00:03:58.008 "mask": "0x2000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "bdev_nvme": { 00:03:58.008 "mask": "0x4000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "sock": { 00:03:58.008 "mask": "0x8000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "blob": { 00:03:58.008 "mask": "0x10000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "bdev_raid": { 00:03:58.008 "mask": "0x20000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 }, 00:03:58.008 "scheduler": { 00:03:58.008 "mask": "0x40000", 00:03:58.008 "tpoint_mask": "0x0" 00:03:58.008 } 00:03:58.008 }' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:58.268 00:03:58.268 real 0m0.174s 00:03:58.268 user 0m0.145s 00:03:58.268 sys 0m0.020s 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.268 03:14:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.268 ************************************ 00:03:58.268 END TEST rpc_trace_cmd_test 00:03:58.268 ************************************ 00:03:58.268 03:14:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:58.268 03:14:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:58.268 03:14:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:58.268 03:14:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.268 03:14:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.268 03:14:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.268 ************************************ 00:03:58.268 START TEST rpc_daemon_integrity 00:03:58.268 ************************************ 00:03:58.268 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:58.268 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.268 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.268 03:14:59 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.268 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.268 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.268 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.528 { 00:03:58.528 "name": "Malloc2", 00:03:58.528 "aliases": [ 00:03:58.528 "29554edc-ac6a-4925-99c3-01d6937a6dea" 00:03:58.528 ], 00:03:58.528 "product_name": "Malloc disk", 00:03:58.528 "block_size": 512, 00:03:58.528 "num_blocks": 16384, 00:03:58.528 "uuid": "29554edc-ac6a-4925-99c3-01d6937a6dea", 00:03:58.528 "assigned_rate_limits": { 00:03:58.528 "rw_ios_per_sec": 0, 00:03:58.528 "rw_mbytes_per_sec": 0, 00:03:58.528 "r_mbytes_per_sec": 0, 00:03:58.528 "w_mbytes_per_sec": 0 00:03:58.528 }, 00:03:58.528 "claimed": false, 00:03:58.528 "zoned": false, 00:03:58.528 "supported_io_types": { 00:03:58.528 "read": true, 00:03:58.528 "write": true, 00:03:58.528 "unmap": true, 00:03:58.528 "flush": true, 00:03:58.528 "reset": true, 00:03:58.528 "nvme_admin": false, 00:03:58.528 "nvme_io": false, 00:03:58.528 "nvme_io_md": false, 00:03:58.528 "write_zeroes": true, 00:03:58.528 "zcopy": true, 00:03:58.528 "get_zone_info": false, 00:03:58.528 "zone_management": false, 00:03:58.528 "zone_append": false, 00:03:58.528 "compare": false, 00:03:58.528 "compare_and_write": false, 00:03:58.528 "abort": true, 00:03:58.528 "seek_hole": false, 00:03:58.528 "seek_data": false, 00:03:58.528 "copy": true, 00:03:58.528 "nvme_iov_md": false 00:03:58.528 }, 00:03:58.528 "memory_domains": [ 00:03:58.528 { 00:03:58.528 "dma_device_id": "system", 00:03:58.528 "dma_device_type": 1 00:03:58.528 }, 00:03:58.528 { 00:03:58.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.528 "dma_device_type": 2 00:03:58.528 } 00:03:58.528 ], 00:03:58.528 "driver_specific": {} 00:03:58.528 } 00:03:58.528 ]' 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.528 [2024-12-13 03:14:59.552537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:58.528 
[2024-12-13 03:14:59.552577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.528 [2024-12-13 03:14:59.552596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:03:58.528 [2024-12-13 03:14:59.552605] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.528 [2024-12-13 03:14:59.554518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.528 [2024-12-13 03:14:59.554542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.528 Passthru0 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.528 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.528 { 00:03:58.528 "name": "Malloc2", 00:03:58.528 "aliases": [ 00:03:58.528 "29554edc-ac6a-4925-99c3-01d6937a6dea" 00:03:58.528 ], 00:03:58.528 "product_name": "Malloc disk", 00:03:58.528 "block_size": 512, 00:03:58.528 "num_blocks": 16384, 00:03:58.528 "uuid": "29554edc-ac6a-4925-99c3-01d6937a6dea", 00:03:58.528 "assigned_rate_limits": { 00:03:58.528 "rw_ios_per_sec": 0, 00:03:58.528 "rw_mbytes_per_sec": 0, 00:03:58.528 "r_mbytes_per_sec": 0, 00:03:58.528 "w_mbytes_per_sec": 0 00:03:58.528 }, 00:03:58.528 "claimed": true, 00:03:58.528 "claim_type": "exclusive_write", 00:03:58.528 "zoned": false, 00:03:58.528 "supported_io_types": { 00:03:58.528 "read": true, 00:03:58.528 "write": true, 00:03:58.528 "unmap": true, 00:03:58.528 "flush": true, 00:03:58.528 "reset": true, 00:03:58.528 "nvme_admin": false, 00:03:58.528 "nvme_io": false, 00:03:58.528 "nvme_io_md": false, 00:03:58.528 "write_zeroes": true, 00:03:58.528 "zcopy": true, 00:03:58.528 "get_zone_info": false, 00:03:58.528 "zone_management": false, 00:03:58.528 "zone_append": false, 00:03:58.528 "compare": false, 00:03:58.528 "compare_and_write": false, 00:03:58.528 "abort": true, 00:03:58.528 "seek_hole": false, 00:03:58.528 "seek_data": false, 00:03:58.528 "copy": true, 00:03:58.528 "nvme_iov_md": false 00:03:58.528 }, 00:03:58.528 "memory_domains": [ 00:03:58.528 { 00:03:58.528 "dma_device_id": "system", 00:03:58.528 "dma_device_type": 1 00:03:58.528 }, 00:03:58.528 { 00:03:58.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.528 "dma_device_type": 2 00:03:58.528 } 00:03:58.528 ], 00:03:58.528 "driver_specific": {} 00:03:58.528 }, 00:03:58.528 { 00:03:58.528 "name": "Passthru0", 00:03:58.528 "aliases": [ 00:03:58.528 "7cfd5f7e-7b9a-53d2-96b8-2508ba9cecbe" 00:03:58.528 ], 00:03:58.528 "product_name": "passthru", 00:03:58.529 "block_size": 512, 00:03:58.529 "num_blocks": 16384, 00:03:58.529 "uuid": "7cfd5f7e-7b9a-53d2-96b8-2508ba9cecbe", 00:03:58.529 "assigned_rate_limits": { 00:03:58.529 "rw_ios_per_sec": 0, 00:03:58.529 "rw_mbytes_per_sec": 0, 00:03:58.529 "r_mbytes_per_sec": 0, 00:03:58.529 "w_mbytes_per_sec": 0 00:03:58.529 }, 00:03:58.529 "claimed": false, 00:03:58.529 "zoned": false, 00:03:58.529 "supported_io_types": { 00:03:58.529 "read": true, 00:03:58.529 "write": true, 00:03:58.529 "unmap": true, 00:03:58.529 "flush": true, 00:03:58.529 "reset": true, 
00:03:58.529 "nvme_admin": false, 00:03:58.529 "nvme_io": false, 00:03:58.529 "nvme_io_md": false, 00:03:58.529 "write_zeroes": true, 00:03:58.529 "zcopy": true, 00:03:58.529 "get_zone_info": false, 00:03:58.529 "zone_management": false, 00:03:58.529 "zone_append": false, 00:03:58.529 "compare": false, 00:03:58.529 "compare_and_write": false, 00:03:58.529 "abort": true, 00:03:58.529 "seek_hole": false, 00:03:58.529 "seek_data": false, 00:03:58.529 "copy": true, 00:03:58.529 "nvme_iov_md": false 00:03:58.529 }, 00:03:58.529 "memory_domains": [ 00:03:58.529 { 00:03:58.529 "dma_device_id": "system", 00:03:58.529 "dma_device_type": 1 00:03:58.529 }, 00:03:58.529 { 00:03:58.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.529 "dma_device_type": 2 00:03:58.529 } 00:03:58.529 ], 00:03:58.529 "driver_specific": { 00:03:58.529 "passthru": { 00:03:58.529 "name": "Passthru0", 00:03:58.529 "base_bdev_name": "Malloc2" 00:03:58.529 } 00:03:58.529 } 00:03:58.529 } 00:03:58.529 ]' 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.529 00:03:58.529 real 0m0.262s 00:03:58.529 user 0m0.139s 00:03:58.529 sys 0m0.032s 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.529 03:14:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.529 ************************************ 00:03:58.529 END TEST rpc_daemon_integrity 00:03:58.529 ************************************ 00:03:58.529 03:14:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:58.529 03:14:59 rpc -- rpc/rpc.sh@84 -- # killprocess 2455198 00:03:58.529 03:14:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 2455198 ']' 00:03:58.529 03:14:59 rpc -- common/autotest_common.sh@958 -- # kill -0 2455198 00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@959 -- # uname 00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455198 
00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455198' 00:03:58.788 killing process with pid 2455198 00:03:58.788 03:14:59 rpc -- common/autotest_common.sh@973 -- # kill 2455198 00:03:58.789 03:14:59 rpc -- common/autotest_common.sh@978 -- # wait 2455198 00:04:01.326 00:04:01.326 real 0m4.710s 00:04:01.326 user 0m5.196s 00:04:01.326 sys 0m0.800s 00:04:01.326 03:15:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.326 03:15:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.326 ************************************ 00:04:01.326 END TEST rpc 00:04:01.326 ************************************ 00:04:01.326 03:15:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.326 03:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.326 03:15:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.326 03:15:02 -- common/autotest_common.sh@10 -- # set +x 00:04:01.326 ************************************ 00:04:01.326 START TEST skip_rpc 00:04:01.326 ************************************ 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.327 * Looking for test storage... 00:04:01.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.327 03:15:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.327 --rc genhtml_branch_coverage=1 00:04:01.327 --rc genhtml_function_coverage=1 00:04:01.327 --rc genhtml_legend=1 00:04:01.327 --rc geninfo_all_blocks=1 00:04:01.327 --rc geninfo_unexecuted_blocks=1 00:04:01.327 00:04:01.327 ' 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.327 --rc genhtml_branch_coverage=1 00:04:01.327 --rc genhtml_function_coverage=1 00:04:01.327 --rc genhtml_legend=1 00:04:01.327 --rc geninfo_all_blocks=1 00:04:01.327 --rc geninfo_unexecuted_blocks=1 00:04:01.327 00:04:01.327 ' 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:01.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.327 --rc genhtml_branch_coverage=1 00:04:01.327 --rc genhtml_function_coverage=1 00:04:01.327 --rc genhtml_legend=1 00:04:01.327 --rc geninfo_all_blocks=1 00:04:01.327 --rc geninfo_unexecuted_blocks=1 00:04:01.327 00:04:01.327 ' 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:01.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.327 --rc genhtml_branch_coverage=1 00:04:01.327 --rc genhtml_function_coverage=1 00:04:01.327 --rc genhtml_legend=1 00:04:01.327 --rc geninfo_all_blocks=1 00:04:01.327 --rc geninfo_unexecuted_blocks=1 00:04:01.327 00:04:01.327 ' 00:04:01.327 03:15:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.327 03:15:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.327 03:15:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.327 03:15:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.327 ************************************ 00:04:01.327 START TEST skip_rpc 00:04:01.327 ************************************ 00:04:01.327 03:15:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:01.327 
03:15:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2456196 00:04:01.327 03:15:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.327 03:15:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:01.327 03:15:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:01.327 [2024-12-13 03:15:02.471207] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:01.327 [2024-12-13 03:15:02.471286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456196 ] 00:04:01.587 [2024-12-13 03:15:02.584053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.587 [2024-12-13 03:15:02.688634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2456196 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2456196 ']' 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2456196 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456196 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456196' 00:04:06.868 killing process with pid 2456196 00:04:06.868 03:15:07 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2456196 00:04:06.868 03:15:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2456196 00:04:08.773 00:04:08.773 real 0m7.400s 00:04:08.773 user 0m7.025s 00:04:08.773 sys 0m0.393s 00:04:08.773 03:15:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.773 03:15:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.773 ************************************ 00:04:08.773 END TEST skip_rpc 00:04:08.773 ************************************ 00:04:08.773 03:15:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:08.773 03:15:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.773 03:15:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.773 03:15:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.773 ************************************ 00:04:08.773 START TEST skip_rpc_with_json 00:04:08.773 ************************************ 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2457838 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2457838 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2457838 ']' 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.773 03:15:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.773 [2024-12-13 03:15:09.948129] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:08.773 [2024-12-13 03:15:09.948231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2457838 ] 00:04:09.032 [2024-12-13 03:15:10.066467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.032 [2024-12-13 03:15:10.171220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.968 [2024-12-13 03:15:11.016127] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:09.968 request: 00:04:09.968 { 00:04:09.968 "trtype": "tcp", 00:04:09.968 "method": "nvmf_get_transports", 00:04:09.968 "req_id": 1 00:04:09.968 } 00:04:09.968 Got JSON-RPC error response 00:04:09.968 response: 00:04:09.968 { 00:04:09.968 "code": -19, 00:04:09.968 "message": "No such device" 00:04:09.968 } 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.968 [2024-12-13 03:15:11.028240] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.968 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.227 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.227 03:15:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.227 { 00:04:10.227 "subsystems": [ 00:04:10.227 { 00:04:10.227 "subsystem": "fsdev", 00:04:10.227 "config": [ 00:04:10.227 { 00:04:10.228 "method": "fsdev_set_opts", 00:04:10.228 "params": { 00:04:10.228 "fsdev_io_pool_size": 65535, 00:04:10.228 "fsdev_io_cache_size": 256 00:04:10.228 } 00:04:10.228 } 00:04:10.228 ] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "keyring", 00:04:10.228 "config": [] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "iobuf", 00:04:10.228 "config": [ 00:04:10.228 { 00:04:10.228 "method": "iobuf_set_options", 00:04:10.228 "params": { 00:04:10.228 "small_pool_count": 8192, 00:04:10.228 "large_pool_count": 1024, 00:04:10.228 "small_bufsize": 8192, 00:04:10.228 "large_bufsize": 135168, 00:04:10.228 "enable_numa": false 00:04:10.228 } 00:04:10.228 } 00:04:10.228 ] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "sock", 00:04:10.228 "config": [ 
00:04:10.228 { 00:04:10.228 "method": "sock_set_default_impl", 00:04:10.228 "params": { 00:04:10.228 "impl_name": "posix" 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "sock_impl_set_options", 00:04:10.228 "params": { 00:04:10.228 "impl_name": "ssl", 00:04:10.228 "recv_buf_size": 4096, 00:04:10.228 "send_buf_size": 4096, 00:04:10.228 "enable_recv_pipe": true, 00:04:10.228 "enable_quickack": false, 00:04:10.228 "enable_placement_id": 0, 00:04:10.228 "enable_zerocopy_send_server": true, 00:04:10.228 "enable_zerocopy_send_client": false, 00:04:10.228 "zerocopy_threshold": 0, 00:04:10.228 "tls_version": 0, 00:04:10.228 "enable_ktls": false 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "sock_impl_set_options", 00:04:10.228 "params": { 00:04:10.228 "impl_name": "posix", 00:04:10.228 "recv_buf_size": 2097152, 00:04:10.228 "send_buf_size": 2097152, 00:04:10.228 "enable_recv_pipe": true, 00:04:10.228 "enable_quickack": false, 00:04:10.228 "enable_placement_id": 0, 00:04:10.228 "enable_zerocopy_send_server": true, 00:04:10.228 "enable_zerocopy_send_client": false, 00:04:10.228 "zerocopy_threshold": 0, 00:04:10.228 "tls_version": 0, 00:04:10.228 "enable_ktls": false 00:04:10.228 } 00:04:10.228 } 00:04:10.228 ] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "vmd", 00:04:10.228 "config": [] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "accel", 00:04:10.228 "config": [ 00:04:10.228 { 00:04:10.228 "method": "accel_set_options", 00:04:10.228 "params": { 00:04:10.228 "small_cache_size": 128, 00:04:10.228 "large_cache_size": 16, 00:04:10.228 "task_count": 2048, 00:04:10.228 "sequence_count": 2048, 00:04:10.228 "buf_count": 2048 00:04:10.228 } 00:04:10.228 } 00:04:10.228 ] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "bdev", 00:04:10.228 "config": [ 00:04:10.228 { 00:04:10.228 "method": "bdev_set_options", 00:04:10.228 "params": { 00:04:10.228 "bdev_io_pool_size": 65535, 00:04:10.228 "bdev_io_cache_size": 256, 00:04:10.228 "bdev_auto_examine": true, 00:04:10.228 "iobuf_small_cache_size": 128, 00:04:10.228 "iobuf_large_cache_size": 16 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "bdev_raid_set_options", 00:04:10.228 "params": { 00:04:10.228 "process_window_size_kb": 1024, 00:04:10.228 "process_max_bandwidth_mb_sec": 0 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "bdev_iscsi_set_options", 00:04:10.228 "params": { 00:04:10.228 "timeout_sec": 30 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "bdev_nvme_set_options", 00:04:10.228 "params": { 00:04:10.228 "action_on_timeout": "none", 00:04:10.228 "timeout_us": 0, 00:04:10.228 "timeout_admin_us": 0, 00:04:10.228 "keep_alive_timeout_ms": 10000, 00:04:10.228 "arbitration_burst": 0, 00:04:10.228 "low_priority_weight": 0, 00:04:10.228 "medium_priority_weight": 0, 00:04:10.228 "high_priority_weight": 0, 00:04:10.228 "nvme_adminq_poll_period_us": 10000, 00:04:10.228 "nvme_ioq_poll_period_us": 0, 00:04:10.228 "io_queue_requests": 0, 00:04:10.228 "delay_cmd_submit": true, 00:04:10.228 "transport_retry_count": 4, 00:04:10.228 "bdev_retry_count": 3, 00:04:10.228 "transport_ack_timeout": 0, 00:04:10.228 "ctrlr_loss_timeout_sec": 0, 00:04:10.228 "reconnect_delay_sec": 0, 00:04:10.228 "fast_io_fail_timeout_sec": 0, 00:04:10.228 "disable_auto_failback": false, 00:04:10.228 "generate_uuids": false, 00:04:10.228 "transport_tos": 0, 00:04:10.228 "nvme_error_stat": false, 00:04:10.228 "rdma_srq_size": 0, 00:04:10.228 "io_path_stat": 
false, 00:04:10.228 "allow_accel_sequence": false, 00:04:10.228 "rdma_max_cq_size": 0, 00:04:10.228 "rdma_cm_event_timeout_ms": 0, 00:04:10.228 "dhchap_digests": [ 00:04:10.228 "sha256", 00:04:10.228 "sha384", 00:04:10.228 "sha512" 00:04:10.228 ], 00:04:10.228 "dhchap_dhgroups": [ 00:04:10.228 "null", 00:04:10.228 "ffdhe2048", 00:04:10.228 "ffdhe3072", 00:04:10.228 "ffdhe4096", 00:04:10.228 "ffdhe6144", 00:04:10.228 "ffdhe8192" 00:04:10.228 ], 00:04:10.228 "rdma_umr_per_io": false 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "bdev_nvme_set_hotplug", 00:04:10.228 "params": { 00:04:10.228 "period_us": 100000, 00:04:10.228 "enable": false 00:04:10.228 } 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "method": "bdev_wait_for_examine" 00:04:10.228 } 00:04:10.228 ] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "scsi", 00:04:10.228 "config": null 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "scheduler", 00:04:10.228 "config": [ 00:04:10.228 { 00:04:10.228 "method": "framework_set_scheduler", 00:04:10.228 "params": { 00:04:10.228 "name": "static" 00:04:10.228 } 00:04:10.228 } 00:04:10.228 ] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "vhost_scsi", 00:04:10.228 "config": [] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "vhost_blk", 00:04:10.228 "config": [] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "ublk", 00:04:10.228 "config": [] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "nbd", 00:04:10.228 "config": [] 00:04:10.228 }, 00:04:10.228 { 00:04:10.228 "subsystem": "nvmf", 00:04:10.228 "config": [ 00:04:10.228 { 00:04:10.228 "method": "nvmf_set_config", 00:04:10.228 "params": { 00:04:10.228 "discovery_filter": "match_any", 00:04:10.228 "admin_cmd_passthru": { 00:04:10.228 "identify_ctrlr": false 00:04:10.228 }, 00:04:10.228 "dhchap_digests": [ 00:04:10.228 "sha256", 00:04:10.228 "sha384", 00:04:10.228 "sha512" 00:04:10.228 ], 00:04:10.228 "dhchap_dhgroups": [ 00:04:10.228 "null", 00:04:10.228 "ffdhe2048", 00:04:10.228 "ffdhe3072", 00:04:10.228 "ffdhe4096", 00:04:10.228 "ffdhe6144", 00:04:10.228 "ffdhe8192" 00:04:10.228 ] 00:04:10.228 } 00:04:10.228 }, 00:04:10.229 { 00:04:10.229 "method": "nvmf_set_max_subsystems", 00:04:10.229 "params": { 00:04:10.229 "max_subsystems": 1024 00:04:10.229 } 00:04:10.229 }, 00:04:10.229 { 00:04:10.229 "method": "nvmf_set_crdt", 00:04:10.229 "params": { 00:04:10.229 "crdt1": 0, 00:04:10.229 "crdt2": 0, 00:04:10.229 "crdt3": 0 00:04:10.229 } 00:04:10.229 }, 00:04:10.229 { 00:04:10.229 "method": "nvmf_create_transport", 00:04:10.229 "params": { 00:04:10.229 "trtype": "TCP", 00:04:10.229 "max_queue_depth": 128, 00:04:10.229 "max_io_qpairs_per_ctrlr": 127, 00:04:10.229 "in_capsule_data_size": 4096, 00:04:10.229 "max_io_size": 131072, 00:04:10.229 "io_unit_size": 131072, 00:04:10.229 "max_aq_depth": 128, 00:04:10.229 "num_shared_buffers": 511, 00:04:10.229 "buf_cache_size": 4294967295, 00:04:10.229 "dif_insert_or_strip": false, 00:04:10.229 "zcopy": false, 00:04:10.229 "c2h_success": true, 00:04:10.229 "sock_priority": 0, 00:04:10.229 "abort_timeout_sec": 1, 00:04:10.229 "ack_timeout": 0, 00:04:10.229 "data_wr_pool_size": 0 00:04:10.229 } 00:04:10.229 } 00:04:10.229 ] 00:04:10.229 }, 00:04:10.229 { 00:04:10.229 "subsystem": "iscsi", 00:04:10.229 "config": [ 00:04:10.229 { 00:04:10.229 "method": "iscsi_set_options", 00:04:10.229 "params": { 00:04:10.229 "node_base": "iqn.2016-06.io.spdk", 00:04:10.229 "max_sessions": 128, 00:04:10.229 "max_connections_per_session": 2, 00:04:10.229 
"max_queue_depth": 64, 00:04:10.229 "default_time2wait": 2, 00:04:10.229 "default_time2retain": 20, 00:04:10.229 "first_burst_length": 8192, 00:04:10.229 "immediate_data": true, 00:04:10.229 "allow_duplicated_isid": false, 00:04:10.229 "error_recovery_level": 0, 00:04:10.229 "nop_timeout": 60, 00:04:10.229 "nop_in_interval": 30, 00:04:10.229 "disable_chap": false, 00:04:10.229 "require_chap": false, 00:04:10.229 "mutual_chap": false, 00:04:10.229 "chap_group": 0, 00:04:10.229 "max_large_datain_per_connection": 64, 00:04:10.229 "max_r2t_per_connection": 4, 00:04:10.229 "pdu_pool_size": 36864, 00:04:10.229 "immediate_data_pool_size": 16384, 00:04:10.229 "data_out_pool_size": 2048 00:04:10.229 } 00:04:10.229 } 00:04:10.229 ] 00:04:10.229 } 00:04:10.229 ] 00:04:10.229 } 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2457838 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2457838 ']' 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2457838 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2457838 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2457838' 00:04:10.229 killing process with pid 2457838 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2457838 00:04:10.229 03:15:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2457838 00:04:12.763 03:15:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2458627 00:04:12.763 03:15:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.763 03:15:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2458627 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2458627 ']' 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2458627 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458627 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458627' 00:04:18.028 killing 
process with pid 2458627 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2458627 00:04:18.028 03:15:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2458627 00:04:19.930 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.930 03:15:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:19.930 00:04:19.930 real 0m11.127s 00:04:19.930 user 0m10.720s 00:04:19.930 sys 0m0.865s 00:04:19.930 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.930 03:15:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.930 ************************************ 00:04:19.930 END TEST skip_rpc_with_json 00:04:19.930 ************************************ 00:04:19.930 03:15:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:19.930 03:15:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.930 03:15:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.930 03:15:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.930 ************************************ 00:04:19.930 START TEST skip_rpc_with_delay 00:04:19.930 ************************************ 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:19.930 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:19.930 [2024-12-13 03:15:21.128520] app.c: 842:spdk_app_start: *ERROR*: 
Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:20.189 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:20.189 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.189 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:20.189 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.189 00:04:20.189 real 0m0.130s 00:04:20.189 user 0m0.083s 00:04:20.189 sys 0m0.046s 00:04:20.189 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.189 03:15:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.189 ************************************ 00:04:20.189 END TEST skip_rpc_with_delay 00:04:20.189 ************************************ 00:04:20.189 03:15:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.189 03:15:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.189 03:15:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.189 03:15:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.189 03:15:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.189 03:15:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.189 ************************************ 00:04:20.189 START TEST exit_on_failed_rpc_init 00:04:20.189 ************************************ 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2459812 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2459812 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2459812 ']' 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.189 03:15:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.189 [2024-12-13 03:15:21.342425] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:20.189 [2024-12-13 03:15:21.342512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459812 ] 00:04:20.448 [2024-12-13 03:15:21.457367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.448 [2024-12-13 03:15:21.563750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.383 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.383 [2024-12-13 03:15:22.470651] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:21.383 [2024-12-13 03:15:22.470734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460042 ] 00:04:21.383 [2024-12-13 03:15:22.581904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.641 [2024-12-13 03:15:22.691595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.641 [2024-12-13 03:15:22.691690] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:21.641 [2024-12-13 03:15:22.691709] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.641 [2024-12-13 03:15:22.691720] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2459812 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2459812 ']' 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2459812 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2459812 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2459812' 00:04:21.900 killing process with pid 2459812 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2459812 00:04:21.900 03:15:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2459812 00:04:24.431 00:04:24.431 real 0m4.038s 00:04:24.431 user 0m4.358s 00:04:24.431 sys 0m0.619s 00:04:24.431 03:15:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.431 03:15:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.431 ************************************ 00:04:24.431 END TEST exit_on_failed_rpc_init 00:04:24.431 ************************************ 00:04:24.431 03:15:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.431 00:04:24.431 real 0m23.157s 00:04:24.431 user 0m22.389s 00:04:24.431 sys 0m2.215s 00:04:24.431 03:15:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.431 03:15:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.431 ************************************ 00:04:24.431 END TEST skip_rpc 00:04:24.431 ************************************ 00:04:24.431 03:15:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.431 03:15:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.431 03:15:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.431 03:15:25 -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.431 ************************************ 00:04:24.431 START TEST rpc_client 00:04:24.431 ************************************ 00:04:24.431 03:15:25 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.431 * Looking for test storage... 00:04:24.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:24.431 03:15:25 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.431 03:15:25 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.431 03:15:25 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.431 03:15:25 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.431 03:15:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.432 03:15:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.432 --rc genhtml_branch_coverage=1 00:04:24.432 --rc genhtml_function_coverage=1 00:04:24.432 --rc genhtml_legend=1 00:04:24.432 --rc geninfo_all_blocks=1 00:04:24.432 --rc geninfo_unexecuted_blocks=1 00:04:24.432 00:04:24.432 ' 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.432 --rc genhtml_branch_coverage=1 00:04:24.432 --rc genhtml_function_coverage=1 00:04:24.432 --rc genhtml_legend=1 00:04:24.432 --rc geninfo_all_blocks=1 00:04:24.432 --rc geninfo_unexecuted_blocks=1 00:04:24.432 00:04:24.432 ' 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.432 --rc genhtml_branch_coverage=1 00:04:24.432 --rc genhtml_function_coverage=1 00:04:24.432 --rc genhtml_legend=1 00:04:24.432 --rc geninfo_all_blocks=1 00:04:24.432 --rc geninfo_unexecuted_blocks=1 00:04:24.432 00:04:24.432 ' 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.432 --rc genhtml_branch_coverage=1 00:04:24.432 --rc genhtml_function_coverage=1 00:04:24.432 --rc genhtml_legend=1 00:04:24.432 --rc geninfo_all_blocks=1 00:04:24.432 --rc geninfo_unexecuted_blocks=1 00:04:24.432 00:04:24.432 ' 00:04:24.432 03:15:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:24.432 OK 00:04:24.432 03:15:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.432 00:04:24.432 real 0m0.231s 00:04:24.432 user 0m0.133s 00:04:24.432 sys 0m0.111s 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.432 03:15:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.432 ************************************ 00:04:24.432 END TEST rpc_client 00:04:24.432 ************************************ 00:04:24.691 03:15:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
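The lcov probe traced above runs the lt/cmp_versions helpers from scripts/common.sh, which split dotted versions on `.`, `-`, and `:` and compare them component by component. A simplified sketch of that comparison logic follows; it assumes purely numeric components and is not the exact script shown in the trace.

```bash
# Simplified sketch of the dotted-version comparison traced above.
version_lt() {                      # returns 0 (true) if $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```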
00:04:24.691 03:15:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.691 03:15:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.691 03:15:25 -- common/autotest_common.sh@10 -- # set +x 00:04:24.691 ************************************ 00:04:24.691 START TEST json_config 00:04:24.691 ************************************ 00:04:24.691 03:15:25 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.691 03:15:25 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.691 03:15:25 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:24.691 03:15:25 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.691 03:15:25 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.691 03:15:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.691 03:15:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.691 03:15:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.691 03:15:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.691 03:15:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.691 03:15:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:24.691 03:15:25 json_config -- scripts/common.sh@345 -- # : 1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.691 03:15:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.691 03:15:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@353 -- # local d=1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.691 03:15:25 json_config -- scripts/common.sh@355 -- # echo 1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.691 03:15:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@353 -- # local d=2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.691 03:15:25 json_config -- scripts/common.sh@355 -- # echo 2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.691 03:15:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.691 03:15:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.691 03:15:25 json_config -- scripts/common.sh@368 -- # return 0 00:04:24.691 03:15:25 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.692 --rc genhtml_branch_coverage=1 00:04:24.692 --rc genhtml_function_coverage=1 00:04:24.692 --rc genhtml_legend=1 00:04:24.692 --rc geninfo_all_blocks=1 00:04:24.692 --rc geninfo_unexecuted_blocks=1 00:04:24.692 00:04:24.692 ' 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.692 --rc genhtml_branch_coverage=1 00:04:24.692 --rc genhtml_function_coverage=1 00:04:24.692 --rc genhtml_legend=1 00:04:24.692 --rc geninfo_all_blocks=1 00:04:24.692 --rc geninfo_unexecuted_blocks=1 00:04:24.692 00:04:24.692 ' 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.692 --rc genhtml_branch_coverage=1 00:04:24.692 --rc genhtml_function_coverage=1 00:04:24.692 --rc genhtml_legend=1 00:04:24.692 --rc geninfo_all_blocks=1 00:04:24.692 --rc geninfo_unexecuted_blocks=1 00:04:24.692 00:04:24.692 ' 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.692 --rc genhtml_branch_coverage=1 00:04:24.692 --rc genhtml_function_coverage=1 00:04:24.692 --rc genhtml_legend=1 00:04:24.692 --rc geninfo_all_blocks=1 00:04:24.692 --rc geninfo_unexecuted_blocks=1 00:04:24.692 00:04:24.692 ' 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:24.692 03:15:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.692 03:15:25 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.692 03:15:25 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.692 03:15:25 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.692 03:15:25 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.692 03:15:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.692 03:15:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.692 03:15:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.692 03:15:25 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.692 03:15:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@51 -- # : 0 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
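At this point nvmf/common.sh has exported the target address, ports, and host identity used across the nvmf suite. For reference, these are the variables an initiator-side step would hand to nvme-cli; the invocation below is illustrative only and is not part of the json_config test traced here.

```bash
# Illustrative use of the variables sourced from nvmf/common.sh above:
#   NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_PORT=4420,
#   NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn,
#   NVME_HOST=(--hostnqn=... --hostid=...) as generated by nvme gen-hostnqn.
nvme connect -t tcp \
    -a "$NVMF_TCP_IP_ADDRESS" \
    -s "$NVMF_PORT" \
    -n "$NVME_SUBNQN" \
    "${NVME_HOST[@]}"
```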
00:04:24.692 03:15:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.692 03:15:25 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:24.692 INFO: JSON configuration test init 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.692 03:15:25 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.692 03:15:25 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:24.692 03:15:25 json_config -- json_config/common.sh@10 -- # shift 00:04:24.692 03:15:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.692 03:15:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.692 03:15:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.692 03:15:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.692 03:15:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.692 03:15:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2460833 00:04:24.692 03:15:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.692 Waiting for target to run... 00:04:24.692 03:15:25 json_config -- json_config/common.sh@25 -- # waitforlisten 2460833 /var/tmp/spdk_tgt.sock 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 2460833 ']' 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.692 03:15:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.692 03:15:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.951 [2024-12-13 03:15:25.970040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:24.951 [2024-12-13 03:15:25.970136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460833 ] 00:04:25.209 [2024-12-13 03:15:26.293458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.209 [2024-12-13 03:15:26.391382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.777 03:15:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.777 03:15:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:25.777 03:15:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.777 00:04:25.777 03:15:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:25.777 03:15:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:25.777 03:15:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.777 03:15:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.777 03:15:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:25.777 03:15:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:25.777 03:15:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.777 03:15:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.777 03:15:26 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.777 03:15:26 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:25.777 03:15:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:30.084 03:15:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.084 03:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:30.084 03:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:30.084 03:15:30 json_config -- 
json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@54 -- # sort 00:04:30.084 03:15:30 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:30.085 03:15:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.085 03:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:30.085 03:15:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.085 03:15:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.085 03:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.085 MallocForNvmf0 00:04:30.085 03:15:30 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.085 03:15:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.085 MallocForNvmf1 00:04:30.085 03:15:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.085 03:15:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.343 [2024-12-13 03:15:31.313989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.343 03:15:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.343 03:15:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.343 03:15:31 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.343 03:15:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.601 03:15:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.601 03:15:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.860 03:15:31 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.860 03:15:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.860 [2024-12-13 03:15:32.028335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.860 03:15:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:30.860 03:15:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.860 03:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.119 03:15:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.119 03:15:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.119 03:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.119 03:15:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.119 03:15:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.119 03:15:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.119 MallocBdevForConfigChangeCheck 00:04:31.119 03:15:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.119 03:15:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.119 03:15:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.377 03:15:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.377 03:15:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.636 03:15:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:31.636 INFO: shutting down applications... 
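Spelled out, the tgt_rpc calls traced above amount to the following rpc.py sequence against /var/tmp/spdk_tgt.sock. The commands and parameters are the ones shown in the trace; the repo-relative script path and the final redirection to spdk_tgt_config.json are simplifications of what json_config.sh actually does.

```bash
RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Backing bdevs (size / block-size arguments exactly as in the trace)
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

# TCP transport, then a subsystem exposing both namespaces and a listener
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

# Persist the resulting configuration for the relaunch/diff steps that follow
$RPC save_config > spdk_tgt_config.json
```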
00:04:31.636 03:15:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:31.636 03:15:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:31.636 03:15:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:31.636 03:15:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.539 Calling clear_iscsi_subsystem 00:04:33.539 Calling clear_nvmf_subsystem 00:04:33.539 Calling clear_nbd_subsystem 00:04:33.539 Calling clear_ublk_subsystem 00:04:33.539 Calling clear_vhost_blk_subsystem 00:04:33.539 Calling clear_vhost_scsi_subsystem 00:04:33.539 Calling clear_bdev_subsystem 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:33.539 03:15:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:33.539 03:15:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:33.539 03:15:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.539 03:15:34 json_config -- json_config/common.sh@35 -- # [[ -n 2460833 ]] 00:04:33.539 03:15:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2460833 00:04:33.539 03:15:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.539 03:15:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.539 03:15:34 json_config -- json_config/common.sh@41 -- # kill -0 2460833 00:04:33.539 03:15:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.110 03:15:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.110 03:15:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.110 03:15:35 json_config -- json_config/common.sh@41 -- # kill -0 2460833 00:04:34.110 03:15:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.682 03:15:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.682 03:15:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.682 03:15:35 json_config -- json_config/common.sh@41 -- # kill -0 2460833 00:04:34.682 03:15:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.682 03:15:35 json_config -- json_config/common.sh@43 -- # break 00:04:34.682 03:15:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.682 03:15:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.682 SPDK target shutdown done 00:04:34.682 03:15:35 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:34.682 INFO: relaunching applications... 00:04:34.682 03:15:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.682 03:15:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:34.682 03:15:35 json_config -- json_config/common.sh@10 -- # shift 00:04:34.682 03:15:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.682 03:15:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.682 03:15:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.682 03:15:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.682 03:15:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.682 03:15:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2462533 00:04:34.682 03:15:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.682 Waiting for target to run... 00:04:34.682 03:15:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.682 03:15:35 json_config -- json_config/common.sh@25 -- # waitforlisten 2462533 /var/tmp/spdk_tgt.sock 00:04:34.682 03:15:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 2462533 ']' 00:04:34.682 03:15:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.682 03:15:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.682 03:15:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.682 03:15:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.682 03:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.682 [2024-12-13 03:15:35.719932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:34.683 [2024-12-13 03:15:35.720018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2462533 ] 00:04:35.249 [2024-12-13 03:15:36.214923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.249 [2024-12-13 03:15:36.320127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.435 [2024-12-13 03:15:39.978381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.435 [2024-12-13 03:15:40.010735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:39.435 03:15:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.435 03:15:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:39.435 03:15:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:39.435 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:39.435 INFO: Checking if target configuration is the same... 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:39.435 03:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.435 + '[' 2 -ne 2 ']' 00:04:39.435 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.435 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:39.435 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.435 +++ basename /dev/fd/62 00:04:39.435 ++ mktemp /tmp/62.XXX 00:04:39.435 + tmp_file_1=/tmp/62.o1R 00:04:39.435 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.435 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.435 + tmp_file_2=/tmp/spdk_tgt_config.json.HsE 00:04:39.435 + ret=0 00:04:39.435 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.435 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.435 + diff -u /tmp/62.o1R /tmp/spdk_tgt_config.json.HsE 00:04:39.435 + echo 'INFO: JSON config files are the same' 00:04:39.435 INFO: JSON config files are the same 00:04:39.435 + rm /tmp/62.o1R /tmp/spdk_tgt_config.json.HsE 00:04:39.435 + exit 0 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:39.435 INFO: changing configuration and checking if this can be detected... 
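The "Checking if target configuration is the same" step above reduces to normalizing both JSON documents and diffing them. Roughly (paths shortened; the temp-file names differ per run):

  live=$(mktemp)
  saved=$(mktemp)
  # sort both the live config and the saved file into a canonical form, then compare
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | ./test/json_config/config_filter.py -method sort > "$live"
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
  diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'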
00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.435 03:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.435 03:15:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:39.435 03:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.435 + '[' 2 -ne 2 ']' 00:04:39.435 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.435 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:39.435 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.435 +++ basename /dev/fd/62 00:04:39.435 ++ mktemp /tmp/62.XXX 00:04:39.694 + tmp_file_1=/tmp/62.jX3 00:04:39.694 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.694 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.694 + tmp_file_2=/tmp/spdk_tgt_config.json.R2S 00:04:39.694 + ret=0 00:04:39.694 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.952 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.952 + diff -u /tmp/62.jX3 /tmp/spdk_tgt_config.json.R2S 00:04:39.952 + ret=1 00:04:39.952 + echo '=== Start of file: /tmp/62.jX3 ===' 00:04:39.952 + cat /tmp/62.jX3 00:04:39.952 + echo '=== End of file: /tmp/62.jX3 ===' 00:04:39.952 + echo '' 00:04:39.952 + echo '=== Start of file: /tmp/spdk_tgt_config.json.R2S ===' 00:04:39.952 + cat /tmp/spdk_tgt_config.json.R2S 00:04:39.952 + echo '=== End of file: /tmp/spdk_tgt_config.json.R2S ===' 00:04:39.952 + echo '' 00:04:39.952 + rm /tmp/62.jX3 /tmp/spdk_tgt_config.json.R2S 00:04:39.952 + exit 1 00:04:39.952 03:15:40 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:39.952 INFO: configuration change detected. 
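The change-detection pass above is the same comparison run in reverse: the marker bdev is removed and the diff is then expected to fail. A sketch:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # re-running the sort + diff comparison above should now exit non-zero,
  # which the test reports as "configuration change detected."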
00:04:39.952 03:15:40 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:39.952 03:15:40 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:39.952 03:15:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.952 03:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 2462533 ]] 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.952 03:15:41 json_config -- json_config/json_config.sh@330 -- # killprocess 2462533 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@954 -- # '[' -z 2462533 ']' 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@958 -- # kill -0 2462533 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@959 -- # uname 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2462533 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2462533' 00:04:39.952 killing process with pid 2462533 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@973 -- # kill 2462533 00:04:39.952 03:15:41 json_config -- common/autotest_common.sh@978 -- # wait 2462533 00:04:42.484 03:15:43 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.484 03:15:43 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:42.484 03:15:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:42.484 03:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 03:15:43 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:42.484 03:15:43 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:42.484 INFO: Success 00:04:42.484 00:04:42.484 real 0m17.677s 
00:04:42.484 user 0m18.053s 00:04:42.484 sys 0m2.760s 00:04:42.484 03:15:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.484 03:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 ************************************ 00:04:42.484 END TEST json_config 00:04:42.484 ************************************ 00:04:42.484 03:15:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.484 03:15:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.484 03:15:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.484 03:15:43 -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 ************************************ 00:04:42.484 START TEST json_config_extra_key 00:04:42.484 ************************************ 00:04:42.484 03:15:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.484 03:15:43 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.484 03:15:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.484 03:15:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.485 
03:15:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.485 03:15:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.485 03:15:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.485 03:15:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.485 03:15:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.485 03:15:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.485 03:15:43 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.485 03:15:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:42.485 INFO: launching applications... 
00:04:42.485 03:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2464002 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.485 Waiting for target to run... 00:04:42.485 03:15:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2464002 /var/tmp/spdk_tgt.sock 00:04:42.485 03:15:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2464002 ']' 00:04:42.486 03:15:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.486 03:15:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.486 03:15:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.486 03:15:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.486 03:15:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.486 03:15:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.745 [2024-12-13 03:15:43.720644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:42.745 [2024-12-13 03:15:43.720731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464002 ] 00:04:43.311 [2024-12-13 03:15:44.224212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.311 [2024-12-13 03:15:44.325289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.878 03:15:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.878 03:15:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.878 00:04:43.878 03:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:43.878 INFO: shutting down applications... 
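Both json_config and json_config_extra_key start the target the same way: launch spdk_tgt with a pre-built JSON config and poll until the RPC socket answers. A stand-alone sketch (the 30 x 0.5 s budget mirrors the loops in json_config/common.sh; the real waitforlisten helper performs additional checks):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./test/json_config/extra_key.json &
  tgt_pid=$!
  for ((i = 0; i < 30; i++)); do
      # treat a successful RPC round-trip as "target is up"
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1 && break
      sleep 0.5
  done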
00:04:43.878 03:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2464002 ]] 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2464002 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:43.878 03:15:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.444 03:15:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.444 03:15:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.444 03:15:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:44.444 03:15:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.019 03:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.019 03:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.019 03:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:45.019 03:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.585 03:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.585 03:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.585 03:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:45.585 03:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.842 03:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.842 03:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.842 03:15:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:45.842 03:15:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.409 03:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.409 03:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.409 03:15:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:46.409 03:15:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2464002 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.976 03:15:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.976 SPDK target shutdown done 00:04:46.976 03:15:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.976 Success 00:04:46.976 00:04:46.976 real 0m4.581s 00:04:46.976 user 0m3.837s 00:04:46.976 sys 0m0.708s 
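The repeated kill -0 / sleep 0.5 lines above are json_config/common.sh waiting for the target to exit after SIGINT; the shape of that loop is roughly:

  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$tgt_pid" 2> /dev/null || break   # process gone, stop polling
      sleep 0.5
  done
  echo 'SPDK target shutdown done'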
00:04:46.976 03:15:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.976 03:15:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.976 ************************************ 00:04:46.976 END TEST json_config_extra_key 00:04:46.976 ************************************ 00:04:46.976 03:15:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.976 03:15:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.976 03:15:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.976 03:15:48 -- common/autotest_common.sh@10 -- # set +x 00:04:46.976 ************************************ 00:04:46.976 START TEST alias_rpc 00:04:46.976 ************************************ 00:04:46.976 03:15:48 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.976 * Looking for test storage... 00:04:47.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.235 03:15:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.235 --rc genhtml_branch_coverage=1 00:04:47.235 --rc genhtml_function_coverage=1 00:04:47.235 --rc genhtml_legend=1 00:04:47.235 --rc geninfo_all_blocks=1 00:04:47.235 --rc geninfo_unexecuted_blocks=1 00:04:47.235 00:04:47.235 ' 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.235 --rc genhtml_branch_coverage=1 00:04:47.235 --rc genhtml_function_coverage=1 00:04:47.235 --rc genhtml_legend=1 00:04:47.235 --rc geninfo_all_blocks=1 00:04:47.235 --rc geninfo_unexecuted_blocks=1 00:04:47.235 00:04:47.235 ' 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.235 --rc genhtml_branch_coverage=1 00:04:47.235 --rc genhtml_function_coverage=1 00:04:47.235 --rc genhtml_legend=1 00:04:47.235 --rc geninfo_all_blocks=1 00:04:47.235 --rc geninfo_unexecuted_blocks=1 00:04:47.235 00:04:47.235 ' 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.235 --rc genhtml_branch_coverage=1 00:04:47.235 --rc genhtml_function_coverage=1 00:04:47.235 --rc genhtml_legend=1 00:04:47.235 --rc geninfo_all_blocks=1 00:04:47.235 --rc geninfo_unexecuted_blocks=1 00:04:47.235 00:04:47.235 ' 00:04:47.235 03:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.235 03:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.235 03:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2464815 00:04:47.235 03:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2464815 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2464815 ']' 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:47.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.235 03:15:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.235 [2024-12-13 03:15:48.348566] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:47.235 [2024-12-13 03:15:48.348688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464815 ] 00:04:47.494 [2024-12-13 03:15:48.460485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.494 [2024-12-13 03:15:48.563993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:48.428 03:15:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:48.428 03:15:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2464815 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2464815 ']' 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2464815 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464815 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464815' 00:04:48.428 killing process with pid 2464815 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@973 -- # kill 2464815 00:04:48.428 03:15:49 alias_rpc -- common/autotest_common.sh@978 -- # wait 2464815 00:04:50.958 00:04:50.958 real 0m3.851s 00:04:50.958 user 0m3.915s 00:04:50.958 sys 0m0.539s 00:04:50.958 03:15:51 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.958 03:15:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.958 ************************************ 00:04:50.958 END TEST alias_rpc 00:04:50.958 ************************************ 00:04:50.958 03:15:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:50.958 03:15:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.958 03:15:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.958 03:15:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.958 03:15:51 -- common/autotest_common.sh@10 -- # set +x 00:04:50.958 ************************************ 00:04:50.958 START TEST spdkcli_tcp 00:04:50.958 ************************************ 00:04:50.958 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.958 * Looking for test storage... 
00:04:50.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:50.958 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.958 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.958 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.216 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.216 03:15:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.216 03:15:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.216 03:15:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.216 03:15:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.216 03:15:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.216 03:15:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.217 03:15:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.217 --rc genhtml_branch_coverage=1 00:04:51.217 --rc genhtml_function_coverage=1 00:04:51.217 --rc genhtml_legend=1 00:04:51.217 --rc geninfo_all_blocks=1 00:04:51.217 --rc geninfo_unexecuted_blocks=1 00:04:51.217 00:04:51.217 ' 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.217 --rc genhtml_branch_coverage=1 00:04:51.217 --rc genhtml_function_coverage=1 00:04:51.217 --rc genhtml_legend=1 00:04:51.217 --rc geninfo_all_blocks=1 00:04:51.217 --rc 
geninfo_unexecuted_blocks=1 00:04:51.217 00:04:51.217 ' 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.217 --rc genhtml_branch_coverage=1 00:04:51.217 --rc genhtml_function_coverage=1 00:04:51.217 --rc genhtml_legend=1 00:04:51.217 --rc geninfo_all_blocks=1 00:04:51.217 --rc geninfo_unexecuted_blocks=1 00:04:51.217 00:04:51.217 ' 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.217 --rc genhtml_branch_coverage=1 00:04:51.217 --rc genhtml_function_coverage=1 00:04:51.217 --rc genhtml_legend=1 00:04:51.217 --rc geninfo_all_blocks=1 00:04:51.217 --rc geninfo_unexecuted_blocks=1 00:04:51.217 00:04:51.217 ' 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2465487 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.217 03:15:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2465487 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2465487 ']' 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.217 03:15:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.217 [2024-12-13 03:15:52.292179] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
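spdkcli_tcp exercises the RPC server over TCP rather than the default UNIX socket: in the trace below, socat forwards 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP address. The essential wiring, as a sketch:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r 100: connection retries, -t 2: timeout in seconds, against the TCP bridge
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods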
00:04:51.217 [2024-12-13 03:15:52.292290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465487 ] 00:04:51.217 [2024-12-13 03:15:52.407234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.476 [2024-12-13 03:15:52.514682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.476 [2024-12-13 03:15:52.514689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.411 03:15:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.411 03:15:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:52.411 03:15:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.411 03:15:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2465709 00:04:52.411 03:15:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.411 [ 00:04:52.411 "bdev_malloc_delete", 00:04:52.411 "bdev_malloc_create", 00:04:52.411 "bdev_null_resize", 00:04:52.411 "bdev_null_delete", 00:04:52.411 "bdev_null_create", 00:04:52.411 "bdev_nvme_cuse_unregister", 00:04:52.411 "bdev_nvme_cuse_register", 00:04:52.411 "bdev_opal_new_user", 00:04:52.411 "bdev_opal_set_lock_state", 00:04:52.411 "bdev_opal_delete", 00:04:52.411 "bdev_opal_get_info", 00:04:52.411 "bdev_opal_create", 00:04:52.411 "bdev_nvme_opal_revert", 00:04:52.411 "bdev_nvme_opal_init", 00:04:52.411 "bdev_nvme_send_cmd", 00:04:52.411 "bdev_nvme_set_keys", 00:04:52.411 "bdev_nvme_get_path_iostat", 00:04:52.411 "bdev_nvme_get_mdns_discovery_info", 00:04:52.411 "bdev_nvme_stop_mdns_discovery", 00:04:52.411 "bdev_nvme_start_mdns_discovery", 00:04:52.411 "bdev_nvme_set_multipath_policy", 00:04:52.411 "bdev_nvme_set_preferred_path", 00:04:52.411 "bdev_nvme_get_io_paths", 00:04:52.411 "bdev_nvme_remove_error_injection", 00:04:52.411 "bdev_nvme_add_error_injection", 00:04:52.411 "bdev_nvme_get_discovery_info", 00:04:52.411 "bdev_nvme_stop_discovery", 00:04:52.411 "bdev_nvme_start_discovery", 00:04:52.411 "bdev_nvme_get_controller_health_info", 00:04:52.411 "bdev_nvme_disable_controller", 00:04:52.411 "bdev_nvme_enable_controller", 00:04:52.411 "bdev_nvme_reset_controller", 00:04:52.411 "bdev_nvme_get_transport_statistics", 00:04:52.411 "bdev_nvme_apply_firmware", 00:04:52.411 "bdev_nvme_detach_controller", 00:04:52.411 "bdev_nvme_get_controllers", 00:04:52.411 "bdev_nvme_attach_controller", 00:04:52.411 "bdev_nvme_set_hotplug", 00:04:52.411 "bdev_nvme_set_options", 00:04:52.411 "bdev_passthru_delete", 00:04:52.411 "bdev_passthru_create", 00:04:52.411 "bdev_lvol_set_parent_bdev", 00:04:52.411 "bdev_lvol_set_parent", 00:04:52.411 "bdev_lvol_check_shallow_copy", 00:04:52.411 "bdev_lvol_start_shallow_copy", 00:04:52.411 "bdev_lvol_grow_lvstore", 00:04:52.411 "bdev_lvol_get_lvols", 00:04:52.411 "bdev_lvol_get_lvstores", 00:04:52.411 "bdev_lvol_delete", 00:04:52.411 "bdev_lvol_set_read_only", 00:04:52.411 "bdev_lvol_resize", 00:04:52.411 "bdev_lvol_decouple_parent", 00:04:52.411 "bdev_lvol_inflate", 00:04:52.411 "bdev_lvol_rename", 00:04:52.411 "bdev_lvol_clone_bdev", 00:04:52.411 "bdev_lvol_clone", 00:04:52.411 "bdev_lvol_snapshot", 00:04:52.411 "bdev_lvol_create", 00:04:52.411 "bdev_lvol_delete_lvstore", 00:04:52.411 "bdev_lvol_rename_lvstore", 
00:04:52.411 "bdev_lvol_create_lvstore", 00:04:52.411 "bdev_raid_set_options", 00:04:52.411 "bdev_raid_remove_base_bdev", 00:04:52.411 "bdev_raid_add_base_bdev", 00:04:52.411 "bdev_raid_delete", 00:04:52.411 "bdev_raid_create", 00:04:52.411 "bdev_raid_get_bdevs", 00:04:52.411 "bdev_error_inject_error", 00:04:52.411 "bdev_error_delete", 00:04:52.411 "bdev_error_create", 00:04:52.411 "bdev_split_delete", 00:04:52.411 "bdev_split_create", 00:04:52.411 "bdev_delay_delete", 00:04:52.411 "bdev_delay_create", 00:04:52.411 "bdev_delay_update_latency", 00:04:52.411 "bdev_zone_block_delete", 00:04:52.411 "bdev_zone_block_create", 00:04:52.411 "blobfs_create", 00:04:52.411 "blobfs_detect", 00:04:52.411 "blobfs_set_cache_size", 00:04:52.411 "bdev_aio_delete", 00:04:52.411 "bdev_aio_rescan", 00:04:52.411 "bdev_aio_create", 00:04:52.411 "bdev_ftl_set_property", 00:04:52.411 "bdev_ftl_get_properties", 00:04:52.411 "bdev_ftl_get_stats", 00:04:52.411 "bdev_ftl_unmap", 00:04:52.411 "bdev_ftl_unload", 00:04:52.411 "bdev_ftl_delete", 00:04:52.411 "bdev_ftl_load", 00:04:52.411 "bdev_ftl_create", 00:04:52.411 "bdev_virtio_attach_controller", 00:04:52.411 "bdev_virtio_scsi_get_devices", 00:04:52.411 "bdev_virtio_detach_controller", 00:04:52.411 "bdev_virtio_blk_set_hotplug", 00:04:52.411 "bdev_iscsi_delete", 00:04:52.411 "bdev_iscsi_create", 00:04:52.411 "bdev_iscsi_set_options", 00:04:52.412 "accel_error_inject_error", 00:04:52.412 "ioat_scan_accel_module", 00:04:52.412 "dsa_scan_accel_module", 00:04:52.412 "iaa_scan_accel_module", 00:04:52.412 "keyring_file_remove_key", 00:04:52.412 "keyring_file_add_key", 00:04:52.412 "keyring_linux_set_options", 00:04:52.412 "fsdev_aio_delete", 00:04:52.412 "fsdev_aio_create", 00:04:52.412 "iscsi_get_histogram", 00:04:52.412 "iscsi_enable_histogram", 00:04:52.412 "iscsi_set_options", 00:04:52.412 "iscsi_get_auth_groups", 00:04:52.412 "iscsi_auth_group_remove_secret", 00:04:52.412 "iscsi_auth_group_add_secret", 00:04:52.412 "iscsi_delete_auth_group", 00:04:52.412 "iscsi_create_auth_group", 00:04:52.412 "iscsi_set_discovery_auth", 00:04:52.412 "iscsi_get_options", 00:04:52.412 "iscsi_target_node_request_logout", 00:04:52.412 "iscsi_target_node_set_redirect", 00:04:52.412 "iscsi_target_node_set_auth", 00:04:52.412 "iscsi_target_node_add_lun", 00:04:52.412 "iscsi_get_stats", 00:04:52.412 "iscsi_get_connections", 00:04:52.412 "iscsi_portal_group_set_auth", 00:04:52.412 "iscsi_start_portal_group", 00:04:52.412 "iscsi_delete_portal_group", 00:04:52.412 "iscsi_create_portal_group", 00:04:52.412 "iscsi_get_portal_groups", 00:04:52.412 "iscsi_delete_target_node", 00:04:52.412 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.412 "iscsi_target_node_add_pg_ig_maps", 00:04:52.412 "iscsi_create_target_node", 00:04:52.412 "iscsi_get_target_nodes", 00:04:52.412 "iscsi_delete_initiator_group", 00:04:52.412 "iscsi_initiator_group_remove_initiators", 00:04:52.412 "iscsi_initiator_group_add_initiators", 00:04:52.412 "iscsi_create_initiator_group", 00:04:52.412 "iscsi_get_initiator_groups", 00:04:52.412 "nvmf_set_crdt", 00:04:52.412 "nvmf_set_config", 00:04:52.412 "nvmf_set_max_subsystems", 00:04:52.412 "nvmf_stop_mdns_prr", 00:04:52.412 "nvmf_publish_mdns_prr", 00:04:52.412 "nvmf_subsystem_get_listeners", 00:04:52.412 "nvmf_subsystem_get_qpairs", 00:04:52.412 "nvmf_subsystem_get_controllers", 00:04:52.412 "nvmf_get_stats", 00:04:52.412 "nvmf_get_transports", 00:04:52.412 "nvmf_create_transport", 00:04:52.412 "nvmf_get_targets", 00:04:52.412 "nvmf_delete_target", 00:04:52.412 "nvmf_create_target", 
00:04:52.412 "nvmf_subsystem_allow_any_host", 00:04:52.412 "nvmf_subsystem_set_keys", 00:04:52.412 "nvmf_subsystem_remove_host", 00:04:52.412 "nvmf_subsystem_add_host", 00:04:52.412 "nvmf_ns_remove_host", 00:04:52.412 "nvmf_ns_add_host", 00:04:52.412 "nvmf_subsystem_remove_ns", 00:04:52.412 "nvmf_subsystem_set_ns_ana_group", 00:04:52.412 "nvmf_subsystem_add_ns", 00:04:52.412 "nvmf_subsystem_listener_set_ana_state", 00:04:52.412 "nvmf_discovery_get_referrals", 00:04:52.412 "nvmf_discovery_remove_referral", 00:04:52.412 "nvmf_discovery_add_referral", 00:04:52.412 "nvmf_subsystem_remove_listener", 00:04:52.412 "nvmf_subsystem_add_listener", 00:04:52.412 "nvmf_delete_subsystem", 00:04:52.412 "nvmf_create_subsystem", 00:04:52.412 "nvmf_get_subsystems", 00:04:52.412 "env_dpdk_get_mem_stats", 00:04:52.412 "nbd_get_disks", 00:04:52.412 "nbd_stop_disk", 00:04:52.412 "nbd_start_disk", 00:04:52.412 "ublk_recover_disk", 00:04:52.412 "ublk_get_disks", 00:04:52.412 "ublk_stop_disk", 00:04:52.412 "ublk_start_disk", 00:04:52.412 "ublk_destroy_target", 00:04:52.412 "ublk_create_target", 00:04:52.412 "virtio_blk_create_transport", 00:04:52.412 "virtio_blk_get_transports", 00:04:52.412 "vhost_controller_set_coalescing", 00:04:52.412 "vhost_get_controllers", 00:04:52.412 "vhost_delete_controller", 00:04:52.412 "vhost_create_blk_controller", 00:04:52.412 "vhost_scsi_controller_remove_target", 00:04:52.412 "vhost_scsi_controller_add_target", 00:04:52.412 "vhost_start_scsi_controller", 00:04:52.412 "vhost_create_scsi_controller", 00:04:52.412 "thread_set_cpumask", 00:04:52.412 "scheduler_set_options", 00:04:52.412 "framework_get_governor", 00:04:52.412 "framework_get_scheduler", 00:04:52.412 "framework_set_scheduler", 00:04:52.412 "framework_get_reactors", 00:04:52.412 "thread_get_io_channels", 00:04:52.412 "thread_get_pollers", 00:04:52.412 "thread_get_stats", 00:04:52.412 "framework_monitor_context_switch", 00:04:52.412 "spdk_kill_instance", 00:04:52.412 "log_enable_timestamps", 00:04:52.412 "log_get_flags", 00:04:52.412 "log_clear_flag", 00:04:52.412 "log_set_flag", 00:04:52.412 "log_get_level", 00:04:52.412 "log_set_level", 00:04:52.412 "log_get_print_level", 00:04:52.412 "log_set_print_level", 00:04:52.412 "framework_enable_cpumask_locks", 00:04:52.412 "framework_disable_cpumask_locks", 00:04:52.412 "framework_wait_init", 00:04:52.412 "framework_start_init", 00:04:52.412 "scsi_get_devices", 00:04:52.412 "bdev_get_histogram", 00:04:52.412 "bdev_enable_histogram", 00:04:52.412 "bdev_set_qos_limit", 00:04:52.412 "bdev_set_qd_sampling_period", 00:04:52.412 "bdev_get_bdevs", 00:04:52.412 "bdev_reset_iostat", 00:04:52.412 "bdev_get_iostat", 00:04:52.412 "bdev_examine", 00:04:52.412 "bdev_wait_for_examine", 00:04:52.412 "bdev_set_options", 00:04:52.412 "accel_get_stats", 00:04:52.412 "accel_set_options", 00:04:52.412 "accel_set_driver", 00:04:52.412 "accel_crypto_key_destroy", 00:04:52.412 "accel_crypto_keys_get", 00:04:52.412 "accel_crypto_key_create", 00:04:52.412 "accel_assign_opc", 00:04:52.412 "accel_get_module_info", 00:04:52.412 "accel_get_opc_assignments", 00:04:52.412 "vmd_rescan", 00:04:52.412 "vmd_remove_device", 00:04:52.412 "vmd_enable", 00:04:52.412 "sock_get_default_impl", 00:04:52.412 "sock_set_default_impl", 00:04:52.412 "sock_impl_set_options", 00:04:52.412 "sock_impl_get_options", 00:04:52.412 "iobuf_get_stats", 00:04:52.412 "iobuf_set_options", 00:04:52.412 "keyring_get_keys", 00:04:52.412 "framework_get_pci_devices", 00:04:52.412 "framework_get_config", 00:04:52.412 "framework_get_subsystems", 
00:04:52.412 "fsdev_set_opts", 00:04:52.412 "fsdev_get_opts", 00:04:52.412 "trace_get_info", 00:04:52.412 "trace_get_tpoint_group_mask", 00:04:52.412 "trace_disable_tpoint_group", 00:04:52.412 "trace_enable_tpoint_group", 00:04:52.412 "trace_clear_tpoint_mask", 00:04:52.412 "trace_set_tpoint_mask", 00:04:52.412 "notify_get_notifications", 00:04:52.412 "notify_get_types", 00:04:52.412 "spdk_get_version", 00:04:52.412 "rpc_get_methods" 00:04:52.412 ] 00:04:52.412 03:15:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.412 03:15:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.412 03:15:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2465487 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2465487 ']' 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2465487 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.412 03:15:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465487 00:04:52.671 03:15:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.671 03:15:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.671 03:15:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465487' 00:04:52.671 killing process with pid 2465487 00:04:52.671 03:15:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2465487 00:04:52.671 03:15:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2465487 00:04:55.208 00:04:55.208 real 0m4.001s 00:04:55.208 user 0m7.301s 00:04:55.208 sys 0m0.580s 00:04:55.208 03:15:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.208 03:15:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.208 ************************************ 00:04:55.208 END TEST spdkcli_tcp 00:04:55.208 ************************************ 00:04:55.208 03:15:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.208 03:15:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.208 03:15:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.208 03:15:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.208 ************************************ 00:04:55.208 START TEST dpdk_mem_utility 00:04:55.208 ************************************ 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.208 * Looking for test storage... 
00:04:55.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.208 03:15:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.208 --rc genhtml_branch_coverage=1 00:04:55.208 --rc genhtml_function_coverage=1 00:04:55.208 --rc genhtml_legend=1 00:04:55.208 --rc geninfo_all_blocks=1 00:04:55.208 --rc geninfo_unexecuted_blocks=1 00:04:55.208 00:04:55.208 ' 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.208 --rc 
genhtml_branch_coverage=1 00:04:55.208 --rc genhtml_function_coverage=1 00:04:55.208 --rc genhtml_legend=1 00:04:55.208 --rc geninfo_all_blocks=1 00:04:55.208 --rc geninfo_unexecuted_blocks=1 00:04:55.208 00:04:55.208 ' 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.208 --rc genhtml_branch_coverage=1 00:04:55.208 --rc genhtml_function_coverage=1 00:04:55.208 --rc genhtml_legend=1 00:04:55.208 --rc geninfo_all_blocks=1 00:04:55.208 --rc geninfo_unexecuted_blocks=1 00:04:55.208 00:04:55.208 ' 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.208 --rc genhtml_branch_coverage=1 00:04:55.208 --rc genhtml_function_coverage=1 00:04:55.208 --rc genhtml_legend=1 00:04:55.208 --rc geninfo_all_blocks=1 00:04:55.208 --rc geninfo_unexecuted_blocks=1 00:04:55.208 00:04:55.208 ' 00:04:55.208 03:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:55.208 03:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2466288 00:04:55.208 03:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2466288 00:04:55.208 03:15:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2466288 ']' 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.208 03:15:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.208 [2024-12-13 03:15:56.343200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:55.208 [2024-12-13 03:15:56.343293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466288 ] 00:04:55.466 [2024-12-13 03:15:56.454781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.466 [2024-12-13 03:15:56.556924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.402 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.402 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:56.402 03:15:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.402 03:15:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.402 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.402 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.402 { 00:04:56.402 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.402 } 00:04:56.402 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.402 03:15:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:56.402 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:56.402 1 heaps totaling size 824.000000 MiB 00:04:56.402 size: 824.000000 MiB heap id: 0 00:04:56.402 end heaps---------- 00:04:56.402 9 mempools totaling size 603.782043 MiB 00:04:56.402 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.402 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.402 size: 100.555481 MiB name: bdev_io_2466288 00:04:56.402 size: 50.003479 MiB name: msgpool_2466288 00:04:56.402 size: 36.509338 MiB name: fsdev_io_2466288 00:04:56.402 size: 21.763794 MiB name: PDU_Pool 00:04:56.402 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.402 size: 4.133484 MiB name: evtpool_2466288 00:04:56.402 size: 0.026123 MiB name: Session_Pool 00:04:56.402 end mempools------- 00:04:56.402 6 memzones totaling size 4.142822 MiB 00:04:56.402 size: 1.000366 MiB name: RG_ring_0_2466288 00:04:56.402 size: 1.000366 MiB name: RG_ring_1_2466288 00:04:56.402 size: 1.000366 MiB name: RG_ring_4_2466288 00:04:56.402 size: 1.000366 MiB name: RG_ring_5_2466288 00:04:56.402 size: 0.125366 MiB name: RG_ring_2_2466288 00:04:56.402 size: 0.015991 MiB name: RG_ring_3_2466288 00:04:56.402 end memzones------- 00:04:56.403 03:15:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.403 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:04:56.403 list of free elements. 
size: 16.847595 MiB 00:04:56.403 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:56.403 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:56.403 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:56.403 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:56.403 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:56.403 element at address: 0x200019a00000 with size: 0.999329 MiB 00:04:56.403 element at address: 0x200000400000 with size: 0.998108 MiB 00:04:56.403 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:56.403 element at address: 0x200019200000 with size: 0.959900 MiB 00:04:56.403 element at address: 0x200019d00040 with size: 0.937256 MiB 00:04:56.403 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:56.403 element at address: 0x20001b400000 with size: 0.583191 MiB 00:04:56.403 element at address: 0x200000c00000 with size: 0.495300 MiB 00:04:56.403 element at address: 0x200019600000 with size: 0.491150 MiB 00:04:56.403 element at address: 0x200019e00000 with size: 0.485657 MiB 00:04:56.403 element at address: 0x200012c00000 with size: 0.436157 MiB 00:04:56.403 element at address: 0x200028800000 with size: 0.411072 MiB 00:04:56.403 element at address: 0x200000800000 with size: 0.355286 MiB 00:04:56.403 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:04:56.403 list of standard malloc elements. size: 199.221497 MiB 00:04:56.403 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:56.403 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:56.403 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:56.403 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:56.403 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:56.403 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:56.403 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:56.403 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:56.403 element at address: 0x200012bff040 with size: 0.000427 MiB 00:04:56.403 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:04:56.403 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:56.403 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:04:56.403 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:56.403 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff200 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff300 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff400 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff500 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff600 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff700 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff800 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bff900 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:56.403 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:56.403 list of memzone associated elements. size: 607.930908 MiB 00:04:56.403 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:56.403 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.403 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:56.403 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.403 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:56.403 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2466288_0 00:04:56.403 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:56.403 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2466288_0 00:04:56.403 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:56.403 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2466288_0 00:04:56.403 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:56.403 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.403 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:56.403 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.403 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:56.403 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2466288_0 00:04:56.403 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:56.403 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2466288 00:04:56.403 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:56.403 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2466288 00:04:56.403 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:56.403 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.403 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:56.403 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.403 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:56.403 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.403 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:56.403 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.403 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:56.403 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2466288 00:04:56.403 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:04:56.403 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2466288 00:04:56.403 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:56.403 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2466288 00:04:56.403 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:56.403 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2466288 00:04:56.403 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:56.403 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2466288 00:04:56.403 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:56.403 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2466288 00:04:56.403 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:04:56.403 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.403 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:04:56.403 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.403 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:04:56.403 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.403 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:56.403 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2466288 00:04:56.403 element at address: 0x20000085f180 with size: 0.125549 MiB 00:04:56.403 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2466288 00:04:56.403 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:04:56.404 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.404 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:04:56.404 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.404 element at address: 0x20000085af40 with size: 0.016174 MiB 00:04:56.404 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2466288 00:04:56.404 element at address: 0x20002886f540 with size: 0.002502 MiB 00:04:56.404 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.404 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:04:56.404 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2466288 00:04:56.404 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:56.404 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2466288 00:04:56.404 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:56.404 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2466288 00:04:56.404 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:04:56.404 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.404 03:15:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.404 03:15:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2466288 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2466288 ']' 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2466288 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2466288 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466288' 00:04:56.404 killing process with pid 2466288 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2466288 00:04:56.404 03:15:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2466288 00:04:58.938 00:04:58.938 real 0m3.721s 00:04:58.938 user 0m3.721s 00:04:58.938 sys 0m0.516s 00:04:58.938 03:15:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.938 03:15:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.938 ************************************ 00:04:58.938 END TEST dpdk_mem_utility 00:04:58.938 ************************************ 00:04:58.938 03:15:59 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.938 03:15:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.938 03:15:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.938 03:15:59 -- common/autotest_common.sh@10 -- # set +x 00:04:58.938 ************************************ 00:04:58.938 START TEST event 00:04:58.938 ************************************ 00:04:58.938 03:15:59 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:58.938 * Looking for test storage... 00:04:58.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:58.938 03:15:59 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.939 03:15:59 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.939 03:15:59 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.939 03:16:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.939 03:16:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.939 03:16:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.939 03:16:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.939 03:16:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.939 03:16:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.939 03:16:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.939 03:16:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.939 03:16:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.939 03:16:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.939 03:16:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.939 03:16:00 event -- scripts/common.sh@344 -- # case "$op" in 00:04:58.939 03:16:00 event -- scripts/common.sh@345 -- # : 1 00:04:58.939 03:16:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.939 03:16:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.939 03:16:00 event -- scripts/common.sh@365 -- # decimal 1 00:04:58.939 03:16:00 event -- scripts/common.sh@353 -- # local d=1 00:04:58.939 03:16:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.939 03:16:00 event -- scripts/common.sh@355 -- # echo 1 00:04:58.939 03:16:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.939 03:16:00 event -- scripts/common.sh@366 -- # decimal 2 00:04:58.939 03:16:00 event -- scripts/common.sh@353 -- # local d=2 00:04:58.939 03:16:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.939 03:16:00 event -- scripts/common.sh@355 -- # echo 2 00:04:58.939 03:16:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.939 03:16:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.939 03:16:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.939 03:16:00 event -- scripts/common.sh@368 -- # return 0 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.939 --rc genhtml_branch_coverage=1 00:04:58.939 --rc genhtml_function_coverage=1 00:04:58.939 --rc genhtml_legend=1 00:04:58.939 --rc geninfo_all_blocks=1 00:04:58.939 --rc geninfo_unexecuted_blocks=1 00:04:58.939 00:04:58.939 ' 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.939 --rc genhtml_branch_coverage=1 00:04:58.939 --rc genhtml_function_coverage=1 00:04:58.939 --rc genhtml_legend=1 00:04:58.939 --rc geninfo_all_blocks=1 00:04:58.939 --rc geninfo_unexecuted_blocks=1 00:04:58.939 00:04:58.939 ' 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.939 --rc genhtml_branch_coverage=1 00:04:58.939 --rc genhtml_function_coverage=1 00:04:58.939 --rc genhtml_legend=1 00:04:58.939 --rc geninfo_all_blocks=1 00:04:58.939 --rc geninfo_unexecuted_blocks=1 00:04:58.939 00:04:58.939 ' 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.939 --rc genhtml_branch_coverage=1 00:04:58.939 --rc genhtml_function_coverage=1 00:04:58.939 --rc genhtml_legend=1 00:04:58.939 --rc geninfo_all_blocks=1 00:04:58.939 --rc geninfo_unexecuted_blocks=1 00:04:58.939 00:04:58.939 ' 00:04:58.939 03:16:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:58.939 03:16:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.939 03:16:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:58.939 03:16:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.939 03:16:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.939 ************************************ 00:04:58.939 START TEST event_perf 00:04:58.939 ************************************ 00:04:58.939 03:16:00 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:58.939 Running I/O for 1 seconds...[2024-12-13 03:16:00.126325] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:58.939 [2024-12-13 03:16:00.126396] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2466960 ] 00:04:59.198 [2024-12-13 03:16:00.241333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.198 [2024-12-13 03:16:00.359815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.198 [2024-12-13 03:16:00.359888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.198 [2024-12-13 03:16:00.360124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.198 [2024-12-13 03:16:00.360133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.572 Running I/O for 1 seconds... 00:05:00.572 lcore 0: 206280 00:05:00.572 lcore 1: 206279 00:05:00.572 lcore 2: 206279 00:05:00.572 lcore 3: 206279 00:05:00.572 done. 00:05:00.572 00:05:00.572 real 0m1.495s 00:05:00.572 user 0m4.358s 00:05:00.572 sys 0m0.133s 00:05:00.572 03:16:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.572 03:16:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.572 ************************************ 00:05:00.572 END TEST event_perf 00:05:00.572 ************************************ 00:05:00.572 03:16:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.572 03:16:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.572 03:16:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.572 03:16:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.572 ************************************ 00:05:00.572 START TEST event_reactor 00:05:00.572 ************************************ 00:05:00.572 03:16:01 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.572 [2024-12-13 03:16:01.678049] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:00.572 [2024-12-13 03:16:01.678122] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467298 ] 00:05:00.830 [2024-12-13 03:16:01.788410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.830 [2024-12-13 03:16:01.894512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.207 test_start 00:05:02.207 oneshot 00:05:02.207 tick 100 00:05:02.207 tick 100 00:05:02.207 tick 250 00:05:02.207 tick 100 00:05:02.207 tick 100 00:05:02.207 tick 250 00:05:02.207 tick 100 00:05:02.207 tick 500 00:05:02.207 tick 100 00:05:02.207 tick 100 00:05:02.207 tick 250 00:05:02.207 tick 100 00:05:02.207 tick 100 00:05:02.207 test_end 00:05:02.207 00:05:02.207 real 0m1.453s 00:05:02.207 user 0m1.339s 00:05:02.207 sys 0m0.108s 00:05:02.207 03:16:03 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.207 03:16:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:02.207 ************************************ 00:05:02.207 END TEST event_reactor 00:05:02.207 ************************************ 00:05:02.207 03:16:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.207 03:16:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:02.207 03:16:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.207 03:16:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.207 ************************************ 00:05:02.207 START TEST event_reactor_perf 00:05:02.207 ************************************ 00:05:02.207 03:16:03 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.207 [2024-12-13 03:16:03.204994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:02.207 [2024-12-13 03:16:03.205067] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467639 ] 00:05:02.207 [2024-12-13 03:16:03.313017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.466 [2024-12-13 03:16:03.418616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.846 test_start 00:05:03.846 test_end 00:05:03.846 Performance: 398234 events per second 00:05:03.846 00:05:03.846 real 0m1.465s 00:05:03.846 user 0m1.331s 00:05:03.846 sys 0m0.127s 00:05:03.846 03:16:04 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.846 03:16:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.846 ************************************ 00:05:03.846 END TEST event_reactor_perf 00:05:03.846 ************************************ 00:05:03.846 03:16:04 event -- event/event.sh@49 -- # uname -s 00:05:03.846 03:16:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:03.846 03:16:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.846 03:16:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.846 03:16:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.846 03:16:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.846 ************************************ 00:05:03.846 START TEST event_scheduler 00:05:03.846 ************************************ 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.846 * Looking for test storage... 
00:05:03.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.846 03:16:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.846 --rc genhtml_branch_coverage=1 00:05:03.846 --rc genhtml_function_coverage=1 00:05:03.846 --rc genhtml_legend=1 00:05:03.846 --rc geninfo_all_blocks=1 00:05:03.846 --rc geninfo_unexecuted_blocks=1 00:05:03.846 00:05:03.846 ' 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.846 --rc genhtml_branch_coverage=1 00:05:03.846 --rc genhtml_function_coverage=1 00:05:03.846 --rc genhtml_legend=1 00:05:03.846 --rc geninfo_all_blocks=1 00:05:03.846 --rc geninfo_unexecuted_blocks=1 00:05:03.846 00:05:03.846 ' 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.846 --rc genhtml_branch_coverage=1 00:05:03.846 --rc genhtml_function_coverage=1 00:05:03.846 --rc genhtml_legend=1 00:05:03.846 --rc geninfo_all_blocks=1 00:05:03.846 --rc geninfo_unexecuted_blocks=1 00:05:03.846 00:05:03.846 ' 00:05:03.846 03:16:04 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.846 --rc genhtml_branch_coverage=1 00:05:03.846 --rc genhtml_function_coverage=1 00:05:03.846 --rc genhtml_legend=1 00:05:03.846 --rc geninfo_all_blocks=1 00:05:03.846 --rc geninfo_unexecuted_blocks=1 00:05:03.846 00:05:03.846 ' 00:05:03.846 03:16:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.846 03:16:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:03.847 03:16:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2467953 00:05:03.847 03:16:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.847 03:16:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2467953 00:05:03.847 03:16:04 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2467953 ']' 00:05:03.847 03:16:04 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.847 03:16:04 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.847 03:16:04 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.847 03:16:04 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.847 03:16:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.847 [2024-12-13 03:16:04.925817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:03.847 [2024-12-13 03:16:04.925902] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467953 ] 00:05:03.847 [2024-12-13 03:16:05.034729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.103 [2024-12-13 03:16:05.152810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.103 [2024-12-13 03:16:05.152880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.103 [2024-12-13 03:16:05.152944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.103 [2024-12-13 03:16:05.152955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:04.670 03:16:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.670 [2024-12-13 03:16:05.763359] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:04.670 [2024-12-13 03:16:05.763387] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:04.670 [2024-12-13 03:16:05.763405] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:04.670 [2024-12-13 03:16:05.763415] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:04.670 [2024-12-13 03:16:05.763426] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.670 03:16:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.670 03:16:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.929 [2024-12-13 03:16:06.077812] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
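(Reference sketch, not part of the captured log: the scheduler_create_thread test traced below drives the scheduler test app entirely through plugin RPCs on the default /var/tmp/spdk.sock socket. Assuming that app is still listening and the scheduler_plugin module is importable as scheduler.sh arranges, the same calls could be issued by hand; RPC names and flags are copied from the trace itself.)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # create a thread pinned to core 0 reporting 100% active time (mirrors scheduler.sh@12)
  $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # create an unpinned thread, then mark it 50% active by the id the RPC returns (scheduler.sh@22-23)
  tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  # create one more thread and delete it again (scheduler.sh@25-26)
  tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"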
00:05:04.929 03:16:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.929 03:16:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:04.929 03:16:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.929 03:16:06 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.929 03:16:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.929 ************************************ 00:05:04.929 START TEST scheduler_create_thread 00:05:04.929 ************************************ 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.929 2 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.929 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.187 3 00:05:05.187 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.187 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 4 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 5 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 6 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 7 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 8 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 9 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 10 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.188 03:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.565 03:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.565 03:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.565 03:16:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.565 03:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.565 03:16:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.941 03:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.941 00:05:07.941 real 0m2.625s 00:05:07.941 user 0m0.027s 00:05:07.941 sys 0m0.002s 00:05:07.941 03:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.941 03:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.941 ************************************ 00:05:07.941 END TEST scheduler_create_thread 00:05:07.941 ************************************ 00:05:07.941 03:16:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.941 03:16:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2467953 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2467953 ']' 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2467953 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467953 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467953' 00:05:07.941 killing process with pid 2467953 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2467953 00:05:07.941 03:16:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2467953 00:05:08.200 [2024-12-13 03:16:09.218062] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
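(Reference sketch, not part of the captured log: the app_repeat rounds that follow verify data through NBD. Once app_repeat is listening on /var/tmp/spdk-nbd.sock, the trace creates two 64 MB Malloc bdevs and exports them as /dev/nbd0 and /dev/nbd1 before the dd-based read/write check; the equivalent manual calls, with arguments copied from the trace, would be:)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # two malloc bdevs, 64 MB each with a 4096-byte block size (auto-named Malloc0 and Malloc1)
  $RPC -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  $RPC -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  # export each bdev as an NBD block device for the nbd_rpc_data_verify step
  $RPC -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  $RPC -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1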
00:05:09.577 00:05:09.577 real 0m5.682s 00:05:09.577 user 0m10.105s 00:05:09.577 sys 0m0.457s 00:05:09.577 03:16:10 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.577 03:16:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.577 ************************************ 00:05:09.577 END TEST event_scheduler 00:05:09.577 ************************************ 00:05:09.577 03:16:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.577 03:16:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.577 03:16:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.577 03:16:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.577 03:16:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.577 ************************************ 00:05:09.577 START TEST app_repeat 00:05:09.577 ************************************ 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2468902 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2468902' 00:05:09.577 Process app_repeat pid: 2468902 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.577 spdk_app_start Round 0 00:05:09.577 03:16:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2468902 /var/tmp/spdk-nbd.sock 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2468902 ']' 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.577 03:16:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.577 [2024-12-13 03:16:10.518476] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:09.577 [2024-12-13 03:16:10.518562] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2468902 ] 00:05:09.577 [2024-12-13 03:16:10.631762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.577 [2024-12-13 03:16:10.733823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.577 [2024-12-13 03:16:10.733834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.514 03:16:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.514 03:16:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:10.514 03:16:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.514 Malloc0 00:05:10.514 03:16:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.772 Malloc1 00:05:10.772 03:16:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.772 03:16:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.031 /dev/nbd0 00:05:11.031 03:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.031 03:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.031 1+0 records in 00:05:11.031 1+0 records out 00:05:11.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335684 s, 12.2 MB/s 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.031 03:16:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.031 03:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.031 03:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.031 03:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.290 /dev/nbd1 00:05:11.290 03:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.290 03:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.290 1+0 records in 00:05:11.290 1+0 records out 00:05:11.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000110043 s, 37.2 MB/s 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.290 03:16:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.290 03:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.290 03:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.290 
03:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.290 03:16:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.290 03:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.549 { 00:05:11.549 "nbd_device": "/dev/nbd0", 00:05:11.549 "bdev_name": "Malloc0" 00:05:11.549 }, 00:05:11.549 { 00:05:11.549 "nbd_device": "/dev/nbd1", 00:05:11.549 "bdev_name": "Malloc1" 00:05:11.549 } 00:05:11.549 ]' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.549 { 00:05:11.549 "nbd_device": "/dev/nbd0", 00:05:11.549 "bdev_name": "Malloc0" 00:05:11.549 }, 00:05:11.549 { 00:05:11.549 "nbd_device": "/dev/nbd1", 00:05:11.549 "bdev_name": "Malloc1" 00:05:11.549 } 00:05:11.549 ]' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.549 /dev/nbd1' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.549 /dev/nbd1' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.549 256+0 records in 00:05:11.549 256+0 records out 00:05:11.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101746 s, 103 MB/s 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.549 256+0 records in 00:05:11.549 256+0 records out 00:05:11.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160176 s, 65.5 MB/s 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.549 256+0 records in 00:05:11.549 256+0 records out 00:05:11.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192988 s, 54.3 MB/s 00:05:11.549 03:16:12 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.549 03:16:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.808 03:16:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.066 03:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.325 03:16:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.325 03:16:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.584 03:16:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.964 [2024-12-13 03:16:14.884399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.964 [2024-12-13 03:16:14.984981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.964 [2024-12-13 03:16:14.984981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.224 [2024-12-13 03:16:15.177720] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.224 [2024-12-13 03:16:15.177767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.600 03:16:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.600 03:16:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.600 spdk_app_start Round 1 00:05:15.600 03:16:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2468902 /var/tmp/spdk-nbd.sock 00:05:15.600 03:16:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2468902 ']' 00:05:15.600 03:16:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.600 03:16:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.600 03:16:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
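The nbd_dd_data_verify steps traced in the round above reduce to writing one random 1 MiB pattern through each exported /dev/nbdX device and comparing it back against the source file. A condensed sketch of that write/verify loop, with the workspace paths shortened to a local file name:

  # write phase: 256 x 4 KiB of random data, pushed through each nbd device with O_DIRECT
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
  done
  # verify phase: each Malloc-backed device must hand the identical 1 MiB back
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest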
00:05:15.600 03:16:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.600 03:16:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.858 03:16:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.858 03:16:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.858 03:16:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.117 Malloc0 00:05:16.117 03:16:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.376 Malloc1 00:05:16.376 03:16:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.376 03:16:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.376 /dev/nbd0 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:16.635 1+0 records in 00:05:16.635 1+0 records out 00:05:16.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00011831 s, 34.6 MB/s 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.635 /dev/nbd1 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.635 03:16:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.635 1+0 records in 00:05:16.635 1+0 records out 00:05:16.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210164 s, 19.5 MB/s 00:05:16.635 03:16:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.894 03:16:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.894 03:16:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:16.894 03:16:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.894 03:16:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.894 03:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.894 03:16:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.894 03:16:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.894 03:16:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.894 03:16:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:16.894 { 00:05:16.894 "nbd_device": "/dev/nbd0", 00:05:16.894 "bdev_name": "Malloc0" 00:05:16.894 }, 00:05:16.894 { 00:05:16.894 "nbd_device": "/dev/nbd1", 00:05:16.894 "bdev_name": "Malloc1" 00:05:16.894 } 00:05:16.894 ]' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.894 { 00:05:16.894 "nbd_device": "/dev/nbd0", 00:05:16.894 "bdev_name": "Malloc0" 00:05:16.894 }, 00:05:16.894 { 00:05:16.894 "nbd_device": "/dev/nbd1", 00:05:16.894 "bdev_name": "Malloc1" 00:05:16.894 } 00:05:16.894 ]' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.894 /dev/nbd1' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.894 /dev/nbd1' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.894 03:16:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.153 256+0 records in 00:05:17.153 256+0 records out 00:05:17.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010465 s, 100 MB/s 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.153 256+0 records in 00:05:17.153 256+0 records out 00:05:17.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156866 s, 66.8 MB/s 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.153 256+0 records in 00:05:17.153 256+0 records out 00:05:17.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195515 s, 53.6 MB/s 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.153 03:16:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.412 03:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.671 03:16:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.671 03:16:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.237 03:16:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.612 [2024-12-13 03:16:20.413089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.612 [2024-12-13 03:16:20.517237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.612 [2024-12-13 03:16:20.517244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.612 [2024-12-13 03:16:20.708094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.612 [2024-12-13 03:16:20.708146] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.512 03:16:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.512 03:16:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.512 spdk_app_start Round 2 00:05:21.512 03:16:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2468902 /var/tmp/spdk-nbd.sock 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2468902 ']' 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.512 03:16:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.512 03:16:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.512 Malloc0 00:05:21.512 03:16:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.770 Malloc1 00:05:21.770 03:16:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.770 03:16:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.028 /dev/nbd0 00:05:22.028 03:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.028 03:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:22.028 1+0 records in 00:05:22.028 1+0 records out 00:05:22.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108803 s, 3.8 MB/s 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.028 03:16:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.028 03:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.028 03:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.028 03:16:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.286 /dev/nbd1 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.286 1+0 records in 00:05:22.286 1+0 records out 00:05:22.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207086 s, 19.8 MB/s 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.286 03:16:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.286 03:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:22.544 { 00:05:22.544 "nbd_device": "/dev/nbd0", 00:05:22.544 "bdev_name": "Malloc0" 00:05:22.544 }, 00:05:22.544 { 00:05:22.544 "nbd_device": "/dev/nbd1", 00:05:22.544 "bdev_name": "Malloc1" 00:05:22.544 } 00:05:22.544 ]' 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.544 { 00:05:22.544 "nbd_device": "/dev/nbd0", 00:05:22.544 "bdev_name": "Malloc0" 00:05:22.544 }, 00:05:22.544 { 00:05:22.544 "nbd_device": "/dev/nbd1", 00:05:22.544 "bdev_name": "Malloc1" 00:05:22.544 } 00:05:22.544 ]' 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.544 /dev/nbd1' 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.544 /dev/nbd1' 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.544 03:16:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.545 256+0 records in 00:05:22.545 256+0 records out 00:05:22.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101056 s, 104 MB/s 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.545 256+0 records in 00:05:22.545 256+0 records out 00:05:22.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154903 s, 67.7 MB/s 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.545 256+0 records in 00:05:22.545 256+0 records out 00:05:22.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192189 s, 54.6 MB/s 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.545 03:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.803 03:16:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.061 03:16:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.062 03:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.062 03:16:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.062 03:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.062 03:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.062 03:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.062 03:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.320 03:16:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.320 03:16:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.577 03:16:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.950 [2024-12-13 03:16:25.878831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.950 [2024-12-13 03:16:25.981467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.950 [2024-12-13 03:16:25.981467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.208 [2024-12-13 03:16:26.175432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.208 [2024-12-13 03:16:26.175476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.580 03:16:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2468902 /var/tmp/spdk-nbd.sock 00:05:26.580 03:16:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2468902 ']' 00:05:26.580 03:16:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.580 03:16:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.580 03:16:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:26.580 03:16:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.580 03:16:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.837 03:16:27 event.app_repeat -- event/event.sh@39 -- # killprocess 2468902 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2468902 ']' 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2468902 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468902 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468902' 00:05:26.837 killing process with pid 2468902 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2468902 00:05:26.837 03:16:27 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2468902 00:05:27.771 spdk_app_start is called in Round 0. 00:05:27.771 Shutdown signal received, stop current app iteration 00:05:27.771 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:27.771 spdk_app_start is called in Round 1. 00:05:27.771 Shutdown signal received, stop current app iteration 00:05:27.771 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:27.771 spdk_app_start is called in Round 2. 00:05:27.771 Shutdown signal received, stop current app iteration 00:05:27.771 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:27.771 spdk_app_start is called in Round 3. 
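Each of the rounds above ends the same way: the test asks the running app_repeat instance to shut itself down over its RPC socket, waits a few seconds, and lets the next spdk_app_start round begin. Stripped of the xtrace prefixes, the call visible in the trace is essentially:

  # request a graceful shutdown of the instance listening on the nbd socket,
  # then pause before the next round starts
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3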
00:05:27.771 Shutdown signal received, stop current app iteration 00:05:27.771 03:16:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.771 03:16:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.771 00:05:27.771 real 0m18.471s 00:05:27.771 user 0m38.983s 00:05:27.771 sys 0m2.594s 00:05:27.771 03:16:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.771 03:16:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.771 ************************************ 00:05:27.771 END TEST app_repeat 00:05:27.771 ************************************ 00:05:27.771 03:16:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.771 03:16:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:27.771 03:16:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.771 03:16:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.771 03:16:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.030 ************************************ 00:05:28.030 START TEST cpu_locks 00:05:28.030 ************************************ 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:28.030 * Looking for test storage... 00:05:28.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.030 03:16:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.030 --rc genhtml_branch_coverage=1 00:05:28.030 --rc genhtml_function_coverage=1 00:05:28.030 --rc genhtml_legend=1 00:05:28.030 --rc geninfo_all_blocks=1 00:05:28.030 --rc geninfo_unexecuted_blocks=1 00:05:28.030 00:05:28.030 ' 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.030 --rc genhtml_branch_coverage=1 00:05:28.030 --rc genhtml_function_coverage=1 00:05:28.030 --rc genhtml_legend=1 00:05:28.030 --rc geninfo_all_blocks=1 00:05:28.030 --rc geninfo_unexecuted_blocks=1 00:05:28.030 00:05:28.030 ' 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.030 --rc genhtml_branch_coverage=1 00:05:28.030 --rc genhtml_function_coverage=1 00:05:28.030 --rc genhtml_legend=1 00:05:28.030 --rc geninfo_all_blocks=1 00:05:28.030 --rc geninfo_unexecuted_blocks=1 00:05:28.030 00:05:28.030 ' 00:05:28.030 03:16:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.030 --rc genhtml_branch_coverage=1 00:05:28.030 --rc genhtml_function_coverage=1 00:05:28.030 --rc genhtml_legend=1 00:05:28.030 --rc geninfo_all_blocks=1 00:05:28.030 --rc geninfo_unexecuted_blocks=1 00:05:28.030 00:05:28.030 ' 00:05:28.030 03:16:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.030 03:16:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.031 03:16:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.031 03:16:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.031 03:16:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.031 03:16:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.031 03:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.031 ************************************ 
00:05:28.031 START TEST default_locks 00:05:28.031 ************************************ 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2472288 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2472288 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2472288 ']' 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.031 03:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.289 [2024-12-13 03:16:29.289068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:28.289 [2024-12-13 03:16:29.289151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472288 ] 00:05:28.289 [2024-12-13 03:16:29.400129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.546 [2024-12-13 03:16:29.502549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2472288 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2472288 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.480 lslocks: write error 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2472288 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2472288 ']' 00:05:29.480 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2472288 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472288 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2472288' 00:05:29.738 killing process with pid 2472288 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2472288 00:05:29.738 03:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2472288 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2472288 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2472288 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2472288 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2472288 ']' 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2472288) - No such process 00:05:32.267 ERROR: process (pid: 2472288) is no longer running 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.267 00:05:32.267 real 0m3.850s 00:05:32.267 user 0m3.820s 00:05:32.267 sys 0m0.651s 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.267 03:16:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.267 ************************************ 00:05:32.267 END TEST default_locks 00:05:32.267 ************************************ 00:05:32.267 03:16:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.267 03:16:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.267 03:16:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.267 03:16:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.267 ************************************ 00:05:32.267 START TEST default_locks_via_rpc 00:05:32.267 ************************************ 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2472990 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2472990 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2472990 ']' 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
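The default_locks run that ends just above starts one spdk_tgt on core 0 (-m 0x1), confirms with lslocks that it holds an spdk_cpu_lock file lock, shuts it down, and then checks both that waitforlisten now fails and that no lock files are left in /var/tmp. A minimal standalone sketch of those two checks, assuming the target's PID is in tgt_pid (the real helpers are locks_exist and no_locks in test/event/cpu_locks.sh):

    # locks_exist: the running target should hold a file lock named spdk_cpu_lock_*
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core 0 lock held"

    kill "$tgt_pid" && sleep 1                                   # SIGTERM, so the app can clean up its lock files
    if compgen -G "/var/tmp/spdk_cpu_lock_*" > /dev/null; then   # no_locks
        echo "stale lock files remain" >&2
    else
        echo "no CPU core locks left behind"
    fi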
00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.267 03:16:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.267 [2024-12-13 03:16:33.199458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:32.267 [2024-12-13 03:16:33.199565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472990 ] 00:05:32.267 [2024-12-13 03:16:33.311454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.267 [2024-12-13 03:16:33.416005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2472990 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2472990 00:05:33.200 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2472990 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2472990 ']' 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2472990 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2472990 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.765 
03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2472990' 00:05:33.765 killing process with pid 2472990 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2472990 00:05:33.765 03:16:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2472990 00:05:36.292 00:05:36.292 real 0m3.939s 00:05:36.292 user 0m3.934s 00:05:36.292 sys 0m0.674s 00:05:36.292 03:16:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.292 03:16:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.292 ************************************ 00:05:36.292 END TEST default_locks_via_rpc 00:05:36.292 ************************************ 00:05:36.292 03:16:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.292 03:16:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.292 03:16:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.292 03:16:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.292 ************************************ 00:05:36.292 START TEST non_locking_app_on_locked_coremask 00:05:36.292 ************************************ 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2473691 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2473691 /var/tmp/spdk.sock 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2473691 ']' 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.292 03:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.292 [2024-12-13 03:16:37.184261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
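The default_locks_via_rpc run that finished above toggles the same locks at runtime instead of at startup, using the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs shown in the trace. A sketch of that round-trip, assuming the stock scripts/rpc.py client, the default /var/tmp/spdk.sock socket, and the running target's PID in tgt_pid:

    ./scripts/rpc.py framework_disable_cpumask_locks           # release the per-core lock files
    compgen -G "/var/tmp/spdk_cpu_lock_*" > /dev/null || echo "no lock files while disabled"

    ./scripts/rpc.py framework_enable_cpumask_locks            # claim core 0 again
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core 0 lock re-acquired"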
00:05:36.292 [2024-12-13 03:16:37.184354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473691 ] 00:05:36.292 [2024-12-13 03:16:37.295722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.292 [2024-12-13 03:16:37.403394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.225 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.225 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.225 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2473920 00:05:37.225 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2473920 /var/tmp/spdk2.sock 00:05:37.225 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.226 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2473920 ']' 00:05:37.226 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.226 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.226 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.226 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.226 03:16:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.226 [2024-12-13 03:16:38.306979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:37.226 [2024-12-13 03:16:38.307084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473920 ] 00:05:37.483 [2024-12-13 03:16:38.462752] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
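At this point non_locking_app_on_locked_coremask has one target holding the core 0 lock and a second target on the same core that started anyway because it was launched with --disable-cpumask-locks ("CPU core locks deactivated" above). A sketch of that scenario outside the harness, with the in-tree binary path assumed and the socket and mask values taken from the log:

    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!
    sleep 2                                          # crude stand-in for waitforlisten
    lslocks -p "$first"  | grep -c spdk_cpu_lock     # 1: the locked instance
    lslocks -p "$second" | grep -c spdk_cpu_lock     # 0: started, but never claimed the core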
00:05:37.483 [2024-12-13 03:16:38.462795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.483 [2024-12-13 03:16:38.672536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.009 03:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.009 03:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.009 03:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2473691 00:05:40.009 03:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2473691 00:05:40.009 03:16:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.266 lslocks: write error 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2473691 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2473691 ']' 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2473691 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473691 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473691' 00:05:40.266 killing process with pid 2473691 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2473691 00:05:40.266 03:16:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2473691 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2473920 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2473920 ']' 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2473920 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473920 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473920' 00:05:45.682 
killing process with pid 2473920 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2473920 00:05:45.682 03:16:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2473920 00:05:47.580 00:05:47.580 real 0m11.162s 00:05:47.580 user 0m11.426s 00:05:47.580 sys 0m1.206s 00:05:47.580 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.580 03:16:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.580 ************************************ 00:05:47.580 END TEST non_locking_app_on_locked_coremask 00:05:47.580 ************************************ 00:05:47.580 03:16:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:47.580 03:16:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.580 03:16:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.580 03:16:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.580 ************************************ 00:05:47.580 START TEST locking_app_on_unlocked_coremask 00:05:47.580 ************************************ 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2475533 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2475533 /var/tmp/spdk.sock 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2475533 ']' 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.580 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.581 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.581 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.581 03:16:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.581 [2024-12-13 03:16:48.433423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:47.581 [2024-12-13 03:16:48.433513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475533 ] 00:05:47.581 [2024-12-13 03:16:48.545360] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
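Each test tears its targets down with killprocess, as just seen for pids 2473691 and 2473920. A sketch of that idiom, assuming the PID belongs to a child of the current shell (the real helper in autotest_common.sh also covers FreeBSD and refuses to kill sudo):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                                       # must still be running
        echo "killing $(ps --no-headers -o comm= "$pid") (pid $pid)"     # comm shows reactor_0 for SPDK apps
        kill "$pid"                                                      # SIGTERM, so the app can release its core locks
        wait "$pid" || true                                              # reap it; works because the target is our child
    }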
00:05:47.581 [2024-12-13 03:16:48.545394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.581 [2024-12-13 03:16:48.644232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2475764 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2475764 /var/tmp/spdk2.sock 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2475764 ']' 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.515 03:16:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.515 [2024-12-13 03:16:49.545143] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:48.515 [2024-12-13 03:16:49.545231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475764 ] 00:05:48.515 [2024-12-13 03:16:49.701046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.773 [2024-12-13 03:16:49.903306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.301 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.301 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.301 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2475764 00:05:51.301 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2475764 00:05:51.301 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.558 lslocks: write error 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2475533 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2475533 ']' 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2475533 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2475533 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2475533' 00:05:51.558 killing process with pid 2475533 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2475533 00:05:51.558 03:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2475533 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2475764 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2475764 ']' 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2475764 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2475764 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.818 03:16:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2475764' 00:05:56.818 killing process with pid 2475764 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2475764 00:05:56.818 03:16:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2475764 00:05:58.717 00:05:58.717 real 0m11.249s 00:05:58.717 user 0m11.503s 00:05:58.717 sys 0m1.237s 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.717 ************************************ 00:05:58.717 END TEST locking_app_on_unlocked_coremask 00:05:58.717 ************************************ 00:05:58.717 03:16:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.717 03:16:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.717 03:16:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.717 03:16:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.717 ************************************ 00:05:58.717 START TEST locking_app_on_locked_coremask 00:05:58.717 ************************************ 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2477527 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2477527 /var/tmp/spdk.sock 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477527 ']' 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.717 03:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.717 [2024-12-13 03:16:59.751526] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
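locking_app_on_locked_coremask, starting here, expects its second target to fail to boot on the already-locked core 0, and the suite expresses expected failures with a NOT wrapper (visible below around waitforlisten). A simplified sketch of that idiom, based on the behavior logged from autotest_common.sh, which additionally validates the command and treats exit codes above 128 as unexpected:

    NOT() {
        if "$@"; then
            return 1          # the command was expected to fail but succeeded
        fi
        return 0
    }
    # usage, as in the check below:
    # NOT waitforlisten "$pid2" /var/tmp/spdk2.sock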
00:05:58.717 [2024-12-13 03:16:59.751611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477527 ] 00:05:58.717 [2024-12-13 03:16:59.866582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.975 [2024-12-13 03:16:59.977750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2477615 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2477615 /var/tmp/spdk2.sock 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2477615 /var/tmp/spdk2.sock 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2477615 /var/tmp/spdk2.sock 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2477615 ']' 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.909 03:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.909 [2024-12-13 03:17:00.881075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:59.909 [2024-12-13 03:17:00.881168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477615 ] 00:05:59.909 [2024-12-13 03:17:01.042244] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2477527 has claimed it. 00:05:59.909 [2024-12-13 03:17:01.042295] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2477615) - No such process 00:06:00.475 ERROR: process (pid: 2477615) is no longer running 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2477527 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2477527 00:06:00.475 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.734 lslocks: write error 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2477527 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2477527 ']' 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2477527 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477527 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477527' 00:06:00.734 killing process with pid 2477527 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2477527 00:06:00.734 03:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2477527 00:06:03.269 00:06:03.269 real 0m4.518s 00:06:03.269 user 0m4.676s 00:06:03.269 sys 0m0.803s 00:06:03.269 03:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:03.269 03:17:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 END TEST locking_app_on_locked_coremask 00:06:03.269 ************************************ 00:06:03.269 03:17:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:03.269 03:17:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.269 03:17:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.269 03:17:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 START TEST locking_overlapped_coremask 00:06:03.269 ************************************ 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2478295 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2478295 /var/tmp/spdk.sock 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2478295 ']' 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.269 03:17:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 [2024-12-13 03:17:04.342542] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:03.269 [2024-12-13 03:17:04.342629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478295 ] 00:06:03.269 [2024-12-13 03:17:04.454239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.528 [2024-12-13 03:17:04.561190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.528 [2024-12-13 03:17:04.561257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.528 [2024-12-13 03:17:04.561262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2478521 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2478521 /var/tmp/spdk2.sock 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2478521 /var/tmp/spdk2.sock 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2478521 /var/tmp/spdk2.sock 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2478521 ']' 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.464 03:17:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.464 [2024-12-13 03:17:05.486608] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
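For locking_overlapped_coremask the two reactor masks intersect on exactly one core: 0x7 is binary 00111 (cores 0, 1, 2) and 0x1c is binary 11100 (cores 2, 3, 4), so the second target's claim below fails on core 2 specifically. The overlap can be checked with a one-liner:

    # 0x7  = 0b00111 -> cores 0,1,2 (first target)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target)
    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2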
00:06:04.464 [2024-12-13 03:17:05.486696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478521 ] 00:06:04.464 [2024-12-13 03:17:05.643071] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2478295 has claimed it. 00:06:04.464 [2024-12-13 03:17:05.643126] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2478521) - No such process 00:06:05.031 ERROR: process (pid: 2478521) is no longer running 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2478295 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2478295 ']' 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2478295 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2478295 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2478295' 00:06:05.031 killing process with pid 2478295 00:06:05.031 03:17:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2478295 00:06:05.031 03:17:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2478295 00:06:07.563 00:06:07.563 real 0m4.289s 00:06:07.563 user 0m11.819s 00:06:07.563 sys 0m0.629s 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.563 ************************************ 00:06:07.563 END TEST locking_overlapped_coremask 00:06:07.563 ************************************ 00:06:07.563 03:17:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.563 03:17:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.563 03:17:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.563 03:17:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.563 ************************************ 00:06:07.563 START TEST locking_overlapped_coremask_via_rpc 00:06:07.563 ************************************ 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2479008 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2479008 /var/tmp/spdk.sock 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2479008 ']' 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.563 03:17:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.563 [2024-12-13 03:17:08.699598] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:07.563 [2024-12-13 03:17:08.699684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479008 ] 00:06:07.822 [2024-12-13 03:17:08.809329] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.822 [2024-12-13 03:17:08.809363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.822 [2024-12-13 03:17:08.918626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.822 [2024-12-13 03:17:08.918694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.822 [2024-12-13 03:17:08.918700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.755 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.755 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2479234 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2479234 /var/tmp/spdk2.sock 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2479234 ']' 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.756 03:17:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.756 [2024-12-13 03:17:09.855030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:08.756 [2024-12-13 03:17:09.855121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479234 ] 00:06:09.014 [2024-12-13 03:17:10.011993] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.014 [2024-12-13 03:17:10.012042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.272 [2024-12-13 03:17:10.243255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.272 [2024-12-13 03:17:10.243338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.272 [2024-12-13 03:17:10.243364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.173 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.173 [2024-12-13 03:17:12.378047] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2479008 has claimed it. 
00:06:11.431 request: 00:06:11.431 { 00:06:11.431 "method": "framework_enable_cpumask_locks", 00:06:11.431 "req_id": 1 00:06:11.431 } 00:06:11.431 Got JSON-RPC error response 00:06:11.431 response: 00:06:11.431 { 00:06:11.431 "code": -32603, 00:06:11.431 "message": "Failed to claim CPU core: 2" 00:06:11.431 } 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2479008 /var/tmp/spdk.sock 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2479008 ']' 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2479234 /var/tmp/spdk2.sock 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2479234 ']' 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
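For reference, a minimal sketch (not part of the captured output; binary and rpc.py paths as used elsewhere in this log) of the overlap scenario that produces the error above: two spdk_tgt instances are started with overlapping masks (0x7 covers cores 0-2, 0x1c covers cores 2-4) and cpumask locks disabled, the first instance is then told to claim its cores, and the same RPC against the second instance is expected to fail on the shared core 2.
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # first instance, default socket /var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # second instance on its own socket
./scripts/rpc.py framework_enable_cpumask_locks                                # first instance claims /var/tmp/spdk_cpu_lock_000..002
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # expected JSON-RPC error: "Failed to claim CPU core: 2"
ls /var/tmp/spdk_cpu_lock_*                                                    # lock files remain held by the first instance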
00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.431 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.689 00:06:11.689 real 0m4.171s 00:06:11.689 user 0m1.125s 00:06:11.689 sys 0m0.192s 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.689 03:17:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.689 ************************************ 00:06:11.689 END TEST locking_overlapped_coremask_via_rpc 00:06:11.689 ************************************ 00:06:11.689 03:17:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.689 03:17:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2479008 ]] 00:06:11.690 03:17:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2479008 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2479008 ']' 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2479008 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2479008 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2479008' 00:06:11.690 killing process with pid 2479008 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2479008 00:06:11.690 03:17:12 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2479008 00:06:14.222 03:17:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2479234 ]] 00:06:14.222 03:17:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2479234 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2479234 ']' 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2479234 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2479234 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2479234' 00:06:14.222 killing process with pid 2479234 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2479234 00:06:14.222 03:17:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2479234 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2479008 ]] 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2479008 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2479008 ']' 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2479008 00:06:16.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2479008) - No such process 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2479008 is not found' 00:06:16.752 Process with pid 2479008 is not found 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2479234 ]] 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2479234 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2479234 ']' 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2479234 00:06:16.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2479234) - No such process 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2479234 is not found' 00:06:16.752 Process with pid 2479234 is not found 00:06:16.752 03:17:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.752 00:06:16.752 real 0m48.833s 00:06:16.752 user 1m24.332s 00:06:16.752 sys 0m6.604s 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.752 03:17:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.752 ************************************ 00:06:16.752 END TEST cpu_locks 00:06:16.752 ************************************ 00:06:16.752 00:06:16.752 real 1m17.990s 00:06:16.752 user 2m20.708s 00:06:16.752 sys 0m10.397s 00:06:16.752 03:17:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.752 03:17:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.752 ************************************ 00:06:16.752 END TEST event 00:06:16.752 ************************************ 00:06:16.752 03:17:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:16.752 03:17:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.752 03:17:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.752 03:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:16.752 ************************************ 00:06:16.752 START TEST thread 00:06:16.752 ************************************ 00:06:16.752 03:17:17 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:17.011 * Looking for test storage... 00:06:17.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.011 03:17:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.011 03:17:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.011 03:17:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.011 03:17:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.011 03:17:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.011 03:17:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.011 03:17:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.011 03:17:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.011 03:17:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.011 03:17:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.011 03:17:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.011 03:17:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:17.011 03:17:18 thread -- scripts/common.sh@345 -- # : 1 00:06:17.011 03:17:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.011 03:17:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.011 03:17:18 thread -- scripts/common.sh@365 -- # decimal 1 00:06:17.011 03:17:18 thread -- scripts/common.sh@353 -- # local d=1 00:06:17.011 03:17:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.011 03:17:18 thread -- scripts/common.sh@355 -- # echo 1 00:06:17.011 03:17:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.011 03:17:18 thread -- scripts/common.sh@366 -- # decimal 2 00:06:17.011 03:17:18 thread -- scripts/common.sh@353 -- # local d=2 00:06:17.011 03:17:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.011 03:17:18 thread -- scripts/common.sh@355 -- # echo 2 00:06:17.011 03:17:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.011 03:17:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.011 03:17:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.011 03:17:18 thread -- scripts/common.sh@368 -- # return 0 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.011 --rc genhtml_branch_coverage=1 00:06:17.011 --rc genhtml_function_coverage=1 00:06:17.011 --rc genhtml_legend=1 00:06:17.011 --rc geninfo_all_blocks=1 00:06:17.011 --rc geninfo_unexecuted_blocks=1 00:06:17.011 00:06:17.011 ' 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.011 --rc genhtml_branch_coverage=1 00:06:17.011 --rc genhtml_function_coverage=1 00:06:17.011 --rc genhtml_legend=1 00:06:17.011 --rc geninfo_all_blocks=1 00:06:17.011 --rc geninfo_unexecuted_blocks=1 00:06:17.011 
00:06:17.011 ' 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.011 --rc genhtml_branch_coverage=1 00:06:17.011 --rc genhtml_function_coverage=1 00:06:17.011 --rc genhtml_legend=1 00:06:17.011 --rc geninfo_all_blocks=1 00:06:17.011 --rc geninfo_unexecuted_blocks=1 00:06:17.011 00:06:17.011 ' 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.011 --rc genhtml_branch_coverage=1 00:06:17.011 --rc genhtml_function_coverage=1 00:06:17.011 --rc genhtml_legend=1 00:06:17.011 --rc geninfo_all_blocks=1 00:06:17.011 --rc geninfo_unexecuted_blocks=1 00:06:17.011 00:06:17.011 ' 00:06:17.011 03:17:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.011 03:17:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.011 ************************************ 00:06:17.011 START TEST thread_poller_perf 00:06:17.011 ************************************ 00:06:17.011 03:17:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:17.011 [2024-12-13 03:17:18.175278] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:17.011 [2024-12-13 03:17:18.175361] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480685 ] 00:06:17.270 [2024-12-13 03:17:18.288265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.270 [2024-12-13 03:17:18.392071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.270 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:18.646 [2024-12-13T02:17:19.855Z] ====================================== 00:06:18.646 [2024-12-13T02:17:19.855Z] busy:2109114408 (cyc) 00:06:18.646 [2024-12-13T02:17:19.855Z] total_run_count: 405000 00:06:18.646 [2024-12-13T02:17:19.855Z] tsc_hz: 2100000000 (cyc) 00:06:18.646 [2024-12-13T02:17:19.855Z] ====================================== 00:06:18.646 [2024-12-13T02:17:19.855Z] poller_cost: 5207 (cyc), 2479 (nsec) 00:06:18.646 00:06:18.646 real 0m1.474s 00:06:18.646 user 0m1.349s 00:06:18.646 sys 0m0.119s 00:06:18.646 03:17:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.646 03:17:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.646 ************************************ 00:06:18.646 END TEST thread_poller_perf 00:06:18.646 ************************************ 00:06:18.646 03:17:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.646 03:17:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:18.646 03:17:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.646 03:17:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.646 ************************************ 00:06:18.646 START TEST thread_poller_perf 00:06:18.646 ************************************ 00:06:18.646 03:17:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.646 [2024-12-13 03:17:19.723501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:18.646 [2024-12-13 03:17:19.723583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480939 ] 00:06:18.646 [2024-12-13 03:17:19.841898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.904 [2024-12-13 03:17:19.947430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.904 Running 1000 pollers for 1 seconds with 0 microseconds period. 
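The poller_cost line printed by poller_perf matches busy cycles divided by total_run_count, converted to nanoseconds at the reported tsc_hz; a quick sanity check on the first run's figures (not part of the log output, plain shell arithmetic):
echo $(( 2109114408 / 405000 ))                             # busy / total_run_count = 5207 cyc per poll, as reported
echo $(( 2109114408 / 405000 * 1000000000 / 2100000000 ))   # converted at tsc_hz 2100000000 = 2479 nsec, as reported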
00:06:20.278 [2024-12-13T02:17:21.487Z] ====================================== 00:06:20.278 [2024-12-13T02:17:21.487Z] busy:2102432282 (cyc) 00:06:20.278 [2024-12-13T02:17:21.487Z] total_run_count: 4746000 00:06:20.278 [2024-12-13T02:17:21.487Z] tsc_hz: 2100000000 (cyc) 00:06:20.278 [2024-12-13T02:17:21.487Z] ====================================== 00:06:20.278 [2024-12-13T02:17:21.487Z] poller_cost: 442 (cyc), 210 (nsec) 00:06:20.278 00:06:20.278 real 0m1.479s 00:06:20.278 user 0m1.345s 00:06:20.278 sys 0m0.128s 00:06:20.278 03:17:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.278 03:17:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.278 ************************************ 00:06:20.278 END TEST thread_poller_perf 00:06:20.278 ************************************ 00:06:20.278 03:17:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:20.278 00:06:20.278 real 0m3.266s 00:06:20.278 user 0m2.857s 00:06:20.278 sys 0m0.420s 00:06:20.278 03:17:21 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.278 03:17:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.278 ************************************ 00:06:20.278 END TEST thread 00:06:20.278 ************************************ 00:06:20.278 03:17:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:20.278 03:17:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:20.278 03:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.278 03:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.278 03:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:20.278 ************************************ 00:06:20.278 START TEST app_cmdline 00:06:20.278 ************************************ 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:20.278 * Looking for test storage... 
00:06:20.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.278 03:17:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.278 --rc genhtml_branch_coverage=1 00:06:20.278 --rc genhtml_function_coverage=1 00:06:20.278 --rc genhtml_legend=1 00:06:20.278 --rc geninfo_all_blocks=1 00:06:20.278 --rc geninfo_unexecuted_blocks=1 00:06:20.278 00:06:20.278 ' 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.278 --rc genhtml_branch_coverage=1 00:06:20.278 --rc genhtml_function_coverage=1 00:06:20.278 --rc genhtml_legend=1 00:06:20.278 --rc geninfo_all_blocks=1 00:06:20.278 --rc geninfo_unexecuted_blocks=1 
00:06:20.278 00:06:20.278 ' 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.278 --rc genhtml_branch_coverage=1 00:06:20.278 --rc genhtml_function_coverage=1 00:06:20.278 --rc genhtml_legend=1 00:06:20.278 --rc geninfo_all_blocks=1 00:06:20.278 --rc geninfo_unexecuted_blocks=1 00:06:20.278 00:06:20.278 ' 00:06:20.278 03:17:21 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.278 --rc genhtml_branch_coverage=1 00:06:20.278 --rc genhtml_function_coverage=1 00:06:20.279 --rc genhtml_legend=1 00:06:20.279 --rc geninfo_all_blocks=1 00:06:20.279 --rc geninfo_unexecuted_blocks=1 00:06:20.279 00:06:20.279 ' 00:06:20.279 03:17:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:20.279 03:17:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2481432 00:06:20.279 03:17:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:20.279 03:17:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2481432 00:06:20.279 03:17:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2481432 ']' 00:06:20.279 03:17:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.279 03:17:21 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.279 03:17:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.279 03:17:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.279 03:17:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.537 [2024-12-13 03:17:21.525994] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:20.537 [2024-12-13 03:17:21.526084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481432 ] 00:06:20.537 [2024-12-13 03:17:21.637308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.537 [2024-12-13 03:17:21.740581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.471 03:17:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.471 03:17:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:21.471 03:17:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:21.730 { 00:06:21.730 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:21.730 "fields": { 00:06:21.730 "major": 25, 00:06:21.730 "minor": 1, 00:06:21.730 "patch": 0, 00:06:21.730 "suffix": "-pre", 00:06:21.730 "commit": "e01cb43b8" 00:06:21.730 } 00:06:21.730 } 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:21.730 03:17:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:21.730 03:17:22 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.730 request: 00:06:21.730 { 00:06:21.730 "method": "env_dpdk_get_mem_stats", 00:06:21.730 "req_id": 1 00:06:21.730 } 00:06:21.730 Got JSON-RPC error response 00:06:21.730 response: 00:06:21.730 { 00:06:21.730 "code": -32601, 00:06:21.730 "message": "Method not found" 00:06:21.730 } 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.989 03:17:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2481432 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2481432 ']' 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2481432 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2481432 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2481432' 00:06:21.989 killing process with pid 2481432 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@973 -- # kill 2481432 00:06:21.989 03:17:22 app_cmdline -- common/autotest_common.sh@978 -- # wait 2481432 00:06:24.519 00:06:24.519 real 0m4.023s 00:06:24.519 user 0m4.259s 00:06:24.519 sys 0m0.585s 00:06:24.519 03:17:25 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.519 03:17:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.519 ************************************ 00:06:24.519 END TEST app_cmdline 00:06:24.519 ************************************ 00:06:24.519 03:17:25 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:24.519 03:17:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.519 03:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.519 03:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.519 ************************************ 00:06:24.519 START TEST version 00:06:24.519 ************************************ 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:24.519 * Looking for test storage... 
00:06:24.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.519 03:17:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.519 03:17:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.519 03:17:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.519 03:17:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.519 03:17:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.519 03:17:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.519 03:17:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.519 03:17:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.519 03:17:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.519 03:17:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.519 03:17:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.519 03:17:25 version -- scripts/common.sh@344 -- # case "$op" in 00:06:24.519 03:17:25 version -- scripts/common.sh@345 -- # : 1 00:06:24.519 03:17:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.519 03:17:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.519 03:17:25 version -- scripts/common.sh@365 -- # decimal 1 00:06:24.519 03:17:25 version -- scripts/common.sh@353 -- # local d=1 00:06:24.519 03:17:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.519 03:17:25 version -- scripts/common.sh@355 -- # echo 1 00:06:24.519 03:17:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.519 03:17:25 version -- scripts/common.sh@366 -- # decimal 2 00:06:24.519 03:17:25 version -- scripts/common.sh@353 -- # local d=2 00:06:24.519 03:17:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.519 03:17:25 version -- scripts/common.sh@355 -- # echo 2 00:06:24.519 03:17:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.519 03:17:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.519 03:17:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.519 03:17:25 version -- scripts/common.sh@368 -- # return 0 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.519 --rc genhtml_branch_coverage=1 00:06:24.519 --rc genhtml_function_coverage=1 00:06:24.519 --rc genhtml_legend=1 00:06:24.519 --rc geninfo_all_blocks=1 00:06:24.519 --rc geninfo_unexecuted_blocks=1 00:06:24.519 00:06:24.519 ' 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.519 --rc genhtml_branch_coverage=1 00:06:24.519 --rc genhtml_function_coverage=1 00:06:24.519 --rc genhtml_legend=1 00:06:24.519 --rc geninfo_all_blocks=1 00:06:24.519 --rc geninfo_unexecuted_blocks=1 00:06:24.519 00:06:24.519 ' 00:06:24.519 03:17:25 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.519 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.519 --rc genhtml_branch_coverage=1 00:06:24.519 --rc genhtml_function_coverage=1 00:06:24.519 --rc genhtml_legend=1 00:06:24.519 --rc geninfo_all_blocks=1 00:06:24.519 --rc geninfo_unexecuted_blocks=1 00:06:24.519 00:06:24.520 ' 00:06:24.520 03:17:25 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.520 --rc genhtml_branch_coverage=1 00:06:24.520 --rc genhtml_function_coverage=1 00:06:24.520 --rc genhtml_legend=1 00:06:24.520 --rc geninfo_all_blocks=1 00:06:24.520 --rc geninfo_unexecuted_blocks=1 00:06:24.520 00:06:24.520 ' 00:06:24.520 03:17:25 version -- app/version.sh@17 -- # get_header_version major 00:06:24.520 03:17:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # cut -f2 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.520 03:17:25 version -- app/version.sh@17 -- # major=25 00:06:24.520 03:17:25 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.520 03:17:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # cut -f2 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.520 03:17:25 version -- app/version.sh@18 -- # minor=1 00:06:24.520 03:17:25 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.520 03:17:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # cut -f2 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.520 03:17:25 version -- app/version.sh@19 -- # patch=0 00:06:24.520 03:17:25 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # cut -f2 00:06:24.520 03:17:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.520 03:17:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.520 03:17:25 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.520 03:17:25 version -- app/version.sh@22 -- # version=25.1 00:06:24.520 03:17:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.520 03:17:25 version -- app/version.sh@28 -- # version=25.1rc0 00:06:24.520 03:17:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:24.520 03:17:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:24.520 03:17:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:24.520 03:17:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:24.520 00:06:24.520 real 0m0.220s 00:06:24.520 user 0m0.147s 00:06:24.520 sys 0m0.109s 00:06:24.520 03:17:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.520 
03:17:25 version -- common/autotest_common.sh@10 -- # set +x 00:06:24.520 ************************************ 00:06:24.520 END TEST version 00:06:24.520 ************************************ 00:06:24.520 03:17:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:24.520 03:17:25 -- spdk/autotest.sh@194 -- # uname -s 00:06:24.520 03:17:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:24.520 03:17:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:24.520 03:17:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:24.520 03:17:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:24.520 03:17:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.520 03:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.520 03:17:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:24.520 03:17:25 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:24.520 03:17:25 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:24.520 03:17:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.520 03:17:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.520 03:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:24.520 ************************************ 00:06:24.520 START TEST nvmf_tcp 00:06:24.520 ************************************ 00:06:24.520 03:17:25 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:24.781 * Looking for test storage... 
00:06:24.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:24.781 03:17:25 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.781 03:17:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.781 03:17:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.781 03:17:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.781 03:17:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.781 03:17:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.781 03:17:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.781 03:17:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.782 03:17:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.782 --rc genhtml_branch_coverage=1 00:06:24.782 --rc genhtml_function_coverage=1 00:06:24.782 --rc genhtml_legend=1 00:06:24.782 --rc geninfo_all_blocks=1 00:06:24.782 --rc geninfo_unexecuted_blocks=1 00:06:24.782 00:06:24.782 ' 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.782 --rc genhtml_branch_coverage=1 00:06:24.782 --rc genhtml_function_coverage=1 00:06:24.782 --rc genhtml_legend=1 00:06:24.782 --rc geninfo_all_blocks=1 00:06:24.782 --rc geninfo_unexecuted_blocks=1 00:06:24.782 00:06:24.782 ' 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:24.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.782 --rc genhtml_branch_coverage=1 00:06:24.782 --rc genhtml_function_coverage=1 00:06:24.782 --rc genhtml_legend=1 00:06:24.782 --rc geninfo_all_blocks=1 00:06:24.782 --rc geninfo_unexecuted_blocks=1 00:06:24.782 00:06:24.782 ' 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.782 --rc genhtml_branch_coverage=1 00:06:24.782 --rc genhtml_function_coverage=1 00:06:24.782 --rc genhtml_legend=1 00:06:24.782 --rc geninfo_all_blocks=1 00:06:24.782 --rc geninfo_unexecuted_blocks=1 00:06:24.782 00:06:24.782 ' 00:06:24.782 03:17:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:24.782 03:17:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:24.782 03:17:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.782 03:17:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.782 ************************************ 00:06:24.782 START TEST nvmf_target_core 00:06:24.782 ************************************ 00:06:24.782 03:17:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:24.782 * Looking for test storage... 00:06:24.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:24.782 03:17:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.782 03:17:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.782 03:17:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.094 --rc genhtml_branch_coverage=1 00:06:25.094 --rc genhtml_function_coverage=1 00:06:25.094 --rc genhtml_legend=1 00:06:25.094 --rc geninfo_all_blocks=1 00:06:25.094 --rc geninfo_unexecuted_blocks=1 00:06:25.094 00:06:25.094 ' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.094 --rc genhtml_branch_coverage=1 00:06:25.094 --rc genhtml_function_coverage=1 00:06:25.094 --rc genhtml_legend=1 00:06:25.094 --rc geninfo_all_blocks=1 00:06:25.094 --rc geninfo_unexecuted_blocks=1 00:06:25.094 00:06:25.094 ' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.094 --rc genhtml_branch_coverage=1 00:06:25.094 --rc genhtml_function_coverage=1 00:06:25.094 --rc genhtml_legend=1 00:06:25.094 --rc geninfo_all_blocks=1 00:06:25.094 --rc geninfo_unexecuted_blocks=1 00:06:25.094 00:06:25.094 ' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.094 --rc genhtml_branch_coverage=1 00:06:25.094 --rc genhtml_function_coverage=1 00:06:25.094 --rc genhtml_legend=1 00:06:25.094 --rc geninfo_all_blocks=1 00:06:25.094 --rc geninfo_unexecuted_blocks=1 00:06:25.094 00:06:25.094 ' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 
************************************ 00:06:25.094 START TEST nvmf_abort 00:06:25.094 ************************************ 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:25.094 * Looking for test storage... 00:06:25.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.094 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.095 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.380 --rc genhtml_branch_coverage=1 00:06:25.380 --rc genhtml_function_coverage=1 00:06:25.380 --rc genhtml_legend=1 00:06:25.380 --rc geninfo_all_blocks=1 00:06:25.380 --rc geninfo_unexecuted_blocks=1 00:06:25.380 00:06:25.380 ' 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.380 --rc genhtml_branch_coverage=1 00:06:25.380 --rc genhtml_function_coverage=1 00:06:25.380 --rc genhtml_legend=1 00:06:25.380 --rc geninfo_all_blocks=1 00:06:25.380 --rc geninfo_unexecuted_blocks=1 00:06:25.380 00:06:25.380 ' 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.380 --rc genhtml_branch_coverage=1 00:06:25.380 --rc genhtml_function_coverage=1 00:06:25.380 --rc genhtml_legend=1 00:06:25.380 --rc geninfo_all_blocks=1 00:06:25.380 --rc geninfo_unexecuted_blocks=1 00:06:25.380 00:06:25.380 ' 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.380 --rc genhtml_branch_coverage=1 00:06:25.380 --rc genhtml_function_coverage=1 00:06:25.380 --rc genhtml_legend=1 00:06:25.380 --rc geninfo_all_blocks=1 00:06:25.380 --rc geninfo_unexecuted_blocks=1 00:06:25.380 00:06:25.380 ' 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.380 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.381 03:17:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.645 03:17:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.645 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:30.646 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:30.646 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:30.646 03:17:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:30.646 Found net devices under 0000:af:00.0: cvl_0_0 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:30.646 Found net devices under 0000:af:00.1: cvl_0_1 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.646 03:17:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:30.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:06:30.646 00:06:30.646 --- 10.0.0.2 ping statistics --- 00:06:30.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.646 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:30.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:06:30.646 00:06:30.646 --- 10.0.0.1 ping statistics --- 00:06:30.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.646 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2485297 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2485297 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2485297 ']' 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.646 03:17:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.646 [2024-12-13 03:17:31.606757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:30.646 [2024-12-13 03:17:31.606864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.646 [2024-12-13 03:17:31.724784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.646 [2024-12-13 03:17:31.827346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.646 [2024-12-13 03:17:31.827391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.646 [2024-12-13 03:17:31.827401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.646 [2024-12-13 03:17:31.827411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.646 [2024-12-13 03:17:31.827418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.646 [2024-12-13 03:17:31.829388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.646 [2024-12-13 03:17:31.829467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.646 [2024-12-13 03:17:31.829474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.213 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.213 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:31.213 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:31.213 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.213 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 [2024-12-13 03:17:32.458670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 Malloc0 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 Delay0 
00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.471 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.471 [2024-12-13 03:17:32.597107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.472 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.472 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.472 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.472 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.472 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.472 03:17:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:31.730 [2024-12-13 03:17:32.751797] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:34.259 Initializing NVMe Controllers 00:06:34.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:34.259 controller IO queue size 128 less than required 00:06:34.259 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:34.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:34.259 Initialization complete. Launching workers. 
00:06:34.259 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33826 00:06:34.259 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33883, failed to submit 66 00:06:34.259 success 33826, unsuccessful 57, failed 0 00:06:34.259 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:34.259 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.259 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:34.259 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.259 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:34.260 rmmod nvme_tcp 00:06:34.260 rmmod nvme_fabrics 00:06:34.260 rmmod nvme_keyring 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2485297 ']' 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2485297 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2485297 ']' 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2485297 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485297 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485297' 00:06:34.260 killing process with pid 2485297 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2485297 00:06:34.260 03:17:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2485297 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.195 03:17:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.195 03:17:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.728 00:06:37.728 real 0m12.286s 00:06:37.728 user 0m16.152s 00:06:37.728 sys 0m4.989s 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.728 ************************************ 00:06:37.728 END TEST nvmf_abort 00:06:37.728 ************************************ 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.728 ************************************ 00:06:37.728 START TEST nvmf_ns_hotplug_stress 00:06:37.728 ************************************ 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:37.728 * Looking for test storage... 
00:06:37.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.728 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.729 --rc genhtml_branch_coverage=1 00:06:37.729 --rc genhtml_function_coverage=1 00:06:37.729 --rc genhtml_legend=1 00:06:37.729 --rc geninfo_all_blocks=1 00:06:37.729 --rc geninfo_unexecuted_blocks=1 00:06:37.729 00:06:37.729 ' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.729 --rc genhtml_branch_coverage=1 00:06:37.729 --rc genhtml_function_coverage=1 00:06:37.729 --rc genhtml_legend=1 00:06:37.729 --rc geninfo_all_blocks=1 00:06:37.729 --rc geninfo_unexecuted_blocks=1 00:06:37.729 00:06:37.729 ' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.729 --rc genhtml_branch_coverage=1 00:06:37.729 --rc genhtml_function_coverage=1 00:06:37.729 --rc genhtml_legend=1 00:06:37.729 --rc geninfo_all_blocks=1 00:06:37.729 --rc geninfo_unexecuted_blocks=1 00:06:37.729 00:06:37.729 ' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.729 --rc genhtml_branch_coverage=1 00:06:37.729 --rc genhtml_function_coverage=1 00:06:37.729 --rc genhtml_legend=1 00:06:37.729 --rc geninfo_all_blocks=1 00:06:37.729 --rc geninfo_unexecuted_blocks=1 00:06:37.729 00:06:37.729 ' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.729 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.730 03:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:42.998 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.998 
03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:42.998 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:42.998 Found net devices under 0000:af:00.0: cvl_0_0 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:42.998 Found net devices under 0000:af:00.1: cvl_0_1 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.998 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.999 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.999 03:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.999 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.999 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.999 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.999 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.999 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:06:43.257 00:06:43.257 --- 10.0.0.2 ping statistics --- 00:06:43.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.257 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:06:43.257 00:06:43.257 --- 10.0.0.1 ping statistics --- 00:06:43.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.257 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2489483 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2489483 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2489483 ']' 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.257 03:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:43.257 [2024-12-13 03:17:44.348910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:43.257 [2024-12-13 03:17:44.349115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.515 [2024-12-13 03:17:44.466907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.515 [2024-12-13 03:17:44.573423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.515 [2024-12-13 03:17:44.573463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.515 [2024-12-13 03:17:44.573473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.515 [2024-12-13 03:17:44.573498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.515 [2024-12-13 03:17:44.573506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
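The netns-based TCP loopback this run uses can be read out of the trace above; condensed into plain commands (the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, the 4420 port and the 0xE core mask are simply the values this particular run happened to use, taken verbatim from the trace, not fixed requirements), the target-side setup amounts to:

  ip netns add cvl_0_0_ns_spdk                                         # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-facing port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target-side address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability check
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

after which the rpc.py configuration traced below (nvmf_create_transport, nvmf_create_subsystem, the listener and the Malloc0/Delay0/NULL1 bdevs) is issued against the target running inside that namespace.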
00:06:43.515 [2024-12-13 03:17:44.575706] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.515 [2024-12-13 03:17:44.575796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.515 [2024-12-13 03:17:44.575804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:44.080 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.337 [2024-12-13 03:17:45.354105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.337 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.594 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.594 [2024-12-13 03:17:45.765266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.594 03:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.852 03:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:45.110 Malloc0 00:06:45.110 03:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:45.368 Delay0 00:06:45.368 03:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.626 03:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:45.626 NULL1 00:06:45.626 03:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:45.884 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2489987 00:06:45.884 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:45.884 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:45.884 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.142 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.400 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:46.401 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:46.659 true 00:06:46.659 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:46.659 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.659 03:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.917 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:46.917 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:47.175 true 00:06:47.175 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:47.175 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.434 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.693 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:47.693 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:47.952 true 00:06:47.952 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:47.952 03:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.952 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.211 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:48.211 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:48.470 true 00:06:48.470 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:48.470 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.728 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.987 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:48.987 03:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:48.987 true 00:06:49.246 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:49.246 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.246 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.505 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:49.505 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:49.764 true 00:06:49.764 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:49.764 03:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.023 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.282 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:50.282 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:50.282 true 00:06:50.282 03:17:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:50.282 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.542 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.802 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:50.802 03:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:51.061 true 00:06:51.061 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:51.061 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.319 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.578 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:51.578 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:51.578 true 00:06:51.578 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:51.578 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.836 03:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.095 03:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:52.095 03:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:52.354 true 00:06:52.354 03:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:52.354 03:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.613 03:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.872 03:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:52.872 03:17:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:52.872 true 00:06:52.872 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:52.872 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.131 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.390 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:53.390 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:53.649 true 00:06:53.649 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:53.649 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.908 03:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.908 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:53.908 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:54.167 true 00:06:54.167 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:54.167 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.424 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.682 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:54.682 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:54.940 true 00:06:54.940 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:54.940 03:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.940 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.197 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:55.197 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:55.456 true 00:06:55.456 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:55.456 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.714 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.973 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:55.973 03:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:55.973 true 00:06:56.232 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:56.232 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.232 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.490 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:56.490 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:56.749 true 00:06:56.749 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:56.749 03:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.008 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.267 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:57.267 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:57.267 true 00:06:57.267 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:57.267 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.525 03:17:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.784 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:57.784 03:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:58.043 true 00:06:58.043 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:58.043 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.302 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.561 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:58.561 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:58.561 true 00:06:58.561 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:58.561 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.820 03:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.079 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:59.079 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:59.337 true 00:06:59.337 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:59.337 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.595 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.854 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:59.854 03:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:59.854 true 00:06:59.854 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:06:59.854 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.113 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.371 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.372 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:00.629 true 00:07:00.629 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:00.629 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.887 03:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.146 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:01.146 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:01.146 true 00:07:01.146 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:01.146 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.404 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.662 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:01.662 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:01.919 true 00:07:01.919 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:01.919 03:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.176 03:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.435 03:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:02.435 03:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:02.435 true 00:07:02.435 03:18:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:02.435 03:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.695 03:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.954 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:02.954 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:03.213 true 00:07:03.213 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:03.213 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.471 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.730 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:03.730 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:03.730 true 00:07:03.730 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:03.730 03:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.989 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.248 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:04.248 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:04.507 true 00:07:04.507 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:04.507 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.765 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.765 03:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:04.765 03:18:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:05.023 true 00:07:05.023 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:05.024 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.282 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.541 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:05.541 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:05.800 true 00:07:05.800 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:05.800 03:18:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.059 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.059 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:06.059 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:06.318 true 00:07:06.318 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:06.318 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.577 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.835 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:06.835 03:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:07.095 true 00:07:07.095 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:07.095 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.354 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.612 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:07.612 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:07.612 true 00:07:07.612 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:07.612 03:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.871 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.130 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:08.130 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:08.389 true 00:07:08.389 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:08.389 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.648 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.906 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:08.906 03:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:08.906 true 00:07:09.165 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:09.165 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.165 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.424 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:09.424 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:09.683 true 00:07:09.683 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:09.683 03:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.942 03:18:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.201 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:10.201 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:10.460 true 00:07:10.460 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:10.460 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.460 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.719 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:10.719 03:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:10.977 true 00:07:10.977 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:10.978 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.237 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.496 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:11.496 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:11.496 true 00:07:11.496 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:11.496 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.755 03:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.014 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:12.014 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:12.273 true 00:07:12.273 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:12.273 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.549 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.855 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:12.855 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:12.855 true 00:07:12.855 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:12.855 03:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.140 03:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.447 03:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:13.447 03:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:13.447 true 00:07:13.447 03:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:13.447 03:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.706 03:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.965 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:13.965 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:14.223 true 00:07:14.223 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:14.223 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.482 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.741 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:14.741 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:14.741 true 00:07:14.741 03:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:14.741 03:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.001 03:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.259 03:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:15.259 03:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:15.518 true 00:07:15.518 03:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:15.518 03:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.777 03:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.036 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:16.037 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:16.037 true 00:07:16.037 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:16.037 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.296 Initializing NVMe Controllers 00:07:16.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:16.296 Controller IO queue size 128, less than required. 00:07:16.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:16.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:16.296 Initialization complete. Launching workers. 
00:07:16.296 ======================================================== 00:07:16.296 Latency(us) 00:07:16.296 Device Information : IOPS MiB/s Average min max 00:07:16.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23469.23 11.46 5454.04 2913.02 9411.12 00:07:16.296 ======================================================== 00:07:16.296 Total : 23469.23 11.46 5454.04 2913.02 9411.12 00:07:16.296 00:07:16.296 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.555 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:16.555 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:16.815 true 00:07:16.815 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2489987 00:07:16.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2489987) - No such process 00:07:16.815 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2489987 00:07:16.815 03:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.073 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.073 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:17.073 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:17.073 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:17.073 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.073 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:17.332 null0 00:07:17.332 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.332 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.332 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:17.591 null1 00:07:17.591 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.591 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.591 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:17.850 null2 00:07:17.850 03:18:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.850 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.850 03:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:17.850 null3 00:07:17.850 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.850 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.850 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:18.108 null4 00:07:18.108 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.108 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.108 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:18.367 null5 00:07:18.367 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.367 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.367 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:18.625 null6 00:07:18.625 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.625 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.625 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:18.625 null7 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
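The stretch of trace above, up to the "kill: (2489987) - No such process" line, is the first phase of the hotplug stress test: while a background I/O process (PID 2489987 in this run) stays alive, the script keeps removing namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adding it backed by the Delay0 bdev, and growing the NULL1 null bdev by one unit per pass (null_size climbing 1027 through 1048 here). A minimal bash sketch of that loop, reconstructed from the traced RPC calls rather than copied from the script; rpc_py and perf_pid are assumed to hold the scripts/rpc.py path and the I/O process PID:

    null_size=1024                       # illustrative start; the trace above shows 1027..1048
    while kill -0 "$perf_pid"; do        # loop ends when kill prints the "No such process" line seen above
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( ++null_size ))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done
    wait "$perf_pid"                     # reap the finished I/O job, then clean up namespaces 1 and 2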
00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
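From the nthreads=8 line onward the test switches to its multi-worker phase: it creates eight small null bdevs (null0 through null7, with the 100 4096 size and block-size arguments seen in the trace), starts one background add_remove worker per bdev with namespace IDs 1 through 8, records each worker's PID, and then waits on all of them (the wait 2496194 2496195 ... line below). A hedged sketch of that orchestration, under the same rpc_py assumption as above; add_remove is the per-worker function sketched after the next run of trace output:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096     # null0 .. null7
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                 # namespace ID i+1 paired with bdev null<i>
        pids+=($!)
    done
    wait "${pids[@]}"                                    # block until all eight workers finish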
00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
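The interleaved ns_hotplug_stress.sh@14, @16, @17 and @18 lines here are the bodies of those eight concurrent workers. Each worker pins a single namespace ID to a single null bdev and hot-adds and hot-removes that namespace ten times in a row against the same cnode1 subsystem, which is what produces the shuffled add_ns/remove_ns ordering in the rest of this section. A minimal sketch of the worker, again reconstructed from the traced calls rather than quoted from the script:

    add_remove() {
        local nsid=$1 bdev=$2
        # ten hot-add / hot-remove cycles of one namespace against cnode1
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Running eight of these at once is what exercises namespace hot-add and hot-remove under contention; additions and removals from different workers land in arbitrary order, as the timestamps above and below show.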
00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:18.885 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2496194 2496195 2496197 2496199 2496202 2496203 2496205 2496207 00:07:18.886 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:18.886 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.886 03:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.886 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.144 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.144 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.144 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.144 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.145 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.403 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.662 03:18:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.662 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.663 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.922 03:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.922 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.922 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.922 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.922 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.923 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.182 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.441 03:18:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.441 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.700 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.959 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.959 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 03:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.960 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.220 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.479 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.737 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.996 03:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.996 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.256 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.516 03:18:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:22.516 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.775 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.034 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 
03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.034 rmmod nvme_tcp 00:07:23.034 rmmod nvme_fabrics 00:07:23.034 rmmod nvme_keyring 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2489483 ']' 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2489483 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2489483 ']' 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2489483 00:07:23.034 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2489483 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2489483' 00:07:23.035 killing process with pid 2489483 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2489483 00:07:23.035 03:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2489483 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.415 03:18:25 
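Editor's note: the trace above is the core loop of ns_hotplug_stress.sh, up to ten passes that attach namespaces 1-8 (each backed by a null bdev) to nqn.2016-06.io.spdk:cnode1 and then detach them again. A minimal sketch of that add/remove cycle follows; the randomized ordering and backgrounded RPC calls are assumptions suggested by the interleaved xtrace output, not copied from the real script.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
NQN=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    # attach nsid 1..8, each backed by the matching null bdev (null0..null7)
    for n in $(shuf -i 1-8); do
        $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    # detach them again so connected initiators see a burst of namespace hot-remove events
    for n in $(shuf -i 1-8); do
        $RPC nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
done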
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.415 03:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.322 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.322 00:07:26.322 real 0m48.987s 00:07:26.322 user 3m27.096s 00:07:26.322 sys 0m16.573s 00:07:26.322 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.322 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.322 ************************************ 00:07:26.322 END TEST nvmf_ns_hotplug_stress 00:07:26.322 ************************************ 00:07:26.323 03:18:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:26.323 03:18:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.323 03:18:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.323 03:18:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.323 ************************************ 00:07:26.323 START TEST nvmf_delete_subsystem 00:07:26.323 ************************************ 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:26.583 * Looking for test storage... 
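Editor's note: autotest now moves on to the next sub-test. run_test wraps delete_subsystem.sh, times it, and prints the START/END banners and the real/user/sys summary visible above. A rough reconstruction of what such a wrapper does, inferred only from the banners and timing lines in this log; the real run_test in SPDK's autotest_common.sh differs in detail.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                       # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# Invocation as it appears in the trace:
run_test nvmf_delete_subsystem \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp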
00:07:26.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.583 --rc genhtml_branch_coverage=1 00:07:26.583 --rc genhtml_function_coverage=1 00:07:26.583 --rc genhtml_legend=1 00:07:26.583 --rc geninfo_all_blocks=1 00:07:26.583 --rc geninfo_unexecuted_blocks=1 00:07:26.583 00:07:26.583 ' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.583 --rc genhtml_branch_coverage=1 00:07:26.583 --rc genhtml_function_coverage=1 00:07:26.583 --rc genhtml_legend=1 00:07:26.583 --rc geninfo_all_blocks=1 00:07:26.583 --rc geninfo_unexecuted_blocks=1 00:07:26.583 00:07:26.583 ' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.583 --rc genhtml_branch_coverage=1 00:07:26.583 --rc genhtml_function_coverage=1 00:07:26.583 --rc genhtml_legend=1 00:07:26.583 --rc geninfo_all_blocks=1 00:07:26.583 --rc geninfo_unexecuted_blocks=1 00:07:26.583 00:07:26.583 ' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:26.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.583 --rc genhtml_branch_coverage=1 00:07:26.583 --rc genhtml_function_coverage=1 00:07:26.583 --rc genhtml_legend=1 00:07:26.583 --rc geninfo_all_blocks=1 00:07:26.583 --rc geninfo_unexecuted_blocks=1 00:07:26.583 00:07:26.583 ' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.583 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.584 03:18:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:31.874 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.874 
03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:31.874 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:31.874 Found net devices under 0000:af:00.0: cvl_0_0 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:31.874 Found net devices under 0000:af:00.1: cvl_0_1 
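Editor's note: gather_supported_nvmf_pci_devs has just matched the two Intel E810 ports (device ID 0x159b) and resolved their kernel interface names through sysfs. The lookup reduces to the following; the PCI addresses and the resulting cvl_0_0/cvl_0_1 names are taken from the trace, the loop itself is an illustrative condensation.
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one entry per netdev bound to this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done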
00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.874 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.875 03:18:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.875 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.875 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.875 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:07:32.134 00:07:32.134 --- 10.0.0.2 ping statistics --- 00:07:32.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.134 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:07:32.134 00:07:32.134 --- 10.0.0.1 ping statistics --- 00:07:32.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.134 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.134 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2500726 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2500726 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2500726 ']' 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.394 03:18:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.394 03:18:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.394 [2024-12-13 03:18:33.427792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:32.394 [2024-12-13 03:18:33.427883] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.394 [2024-12-13 03:18:33.546226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.653 [2024-12-13 03:18:33.649496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.653 [2024-12-13 03:18:33.649543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.653 [2024-12-13 03:18:33.649554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.653 [2024-12-13 03:18:33.649564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.653 [2024-12-13 03:18:33.649574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.653 [2024-12-13 03:18:33.651766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.653 [2024-12-13 03:18:33.651774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 [2024-12-13 03:18:34.267700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.221 03:18:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 [2024-12-13 03:18:34.284078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.221 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.222 NULL1 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.222 Delay0 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2500961 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:33.222 03:18:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:33.222 [2024-12-13 03:18:34.409696] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
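The target side of the test is now fully assembled: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev so that I/O stays in flight long enough for the deletion that follows to race against it. A condensed sketch of the same sequence using SPDK's scripts/rpc.py (the RPC names and arguments are the ones visible in the trace; the rpc.py path and default /var/tmp/spdk.sock socket are assumptions, since the test drives these through its rpc_cmd wrapper):

# Hedged sketch of the subsystem setup exercised above, via scripts/rpc.py.
RPC="./scripts/rpc.py"       # assumes the default /var/tmp/spdk.sock RPC socket

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s artificial latency per op
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# With spdk_nvme_perf still running against Delay0, the test then issues:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# which is what produces the burst of "completed with error (sct=0, sc=8)" I/O below.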
00:07:35.125 03:18:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.125 03:18:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.125 03:18:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Write completed with error 
(sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 starting I/O failed: -6 00:07:35.693 Read completed with error (sct=0, sc=8) 00:07:35.693 [2024-12-13 03:18:36.707208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 starting I/O failed: -6 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 [2024-12-13 03:18:36.708223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed 
with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 [2024-12-13 03:18:36.708888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 
00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Read completed with error (sct=0, sc=8) 00:07:35.694 Write completed with error (sct=0, sc=8) 00:07:35.694 [2024-12-13 03:18:36.710479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:07:36.631 [2024-12-13 03:18:37.672203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 
00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 [2024-12-13 03:18:37.709445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Read completed with error (sct=0, sc=8) 00:07:36.631 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 [2024-12-13 03:18:37.710234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 
00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 [2024-12-13 03:18:37.711761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Read completed with error (sct=0, sc=8) 00:07:36.632 Write completed with error (sct=0, sc=8) 00:07:36.632 [2024-12-13 03:18:37.715815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:07:36.632 03:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.632 03:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:36.632 03:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2500961 00:07:36.632 03:18:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:36.632 Initializing NVMe Controllers 00:07:36.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.632 Controller IO queue size 128, less than required. 00:07:36.632 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:36.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:36.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:36.632 Initialization complete. Launching workers. 00:07:36.632 ======================================================== 00:07:36.632 Latency(us) 00:07:36.632 Device Information : IOPS MiB/s Average min max 00:07:36.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.42 0.10 944437.27 987.40 1014660.49 00:07:36.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.73 0.08 868456.45 687.15 1012564.96 00:07:36.632 ======================================================== 00:07:36.632 Total : 353.15 0.17 910502.02 687.15 1014660.49 00:07:36.632 00:07:36.632 [2024-12-13 03:18:37.720909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:07:36.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2500961 00:07:37.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2500961) - No such process 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2500961 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2500961 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2500961 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.200 03:18:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.200 [2024-12-13 03:18:38.241033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2501556 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:37.200 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.200 [2024-12-13 03:18:38.347017] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
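The second half of the test re-creates the subsystem, starts a shorter 3-second spdk_nvme_perf run (pid 2501556 in this trace), and then simply polls for that process every 0.5 s until it finishes, which is what the repeated kill -0 / sleep 0.5 records below show. A condensed bash reconstruction of that wait loop (logic inferred from the trace, not copied from delete_subsystem.sh):

# Condensed reconstruction of the delete_subsystem.sh wait loop (not verbatim).
perf_pid=$!              # backgrounded spdk_nvme_perf; 2501556 in this run
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf run exceeded ~10 s (20 x 0.5 s)" >&2; exit 1; }
    sleep 0.5
done
# Once perf exits on its own, the final "kill -0 ... No such process" record below
# confirms it, and the test proceeds to tear the target down with nvmftestfini.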
00:07:37.768 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.768 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:37.768 03:18:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.337 03:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.337 03:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:38.337 03:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:38.596 03:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.596 03:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:38.596 03:18:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.165 03:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.165 03:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:39.165 03:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.733 03:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.733 03:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:39.733 03:18:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.300 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.300 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:40.300 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.559 Initializing NVMe Controllers 00:07:40.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.559 Controller IO queue size 128, less than required. 00:07:40.559 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:40.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:40.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:40.559 Initialization complete. Launching workers. 
00:07:40.559 ======================================================== 00:07:40.559 Latency(us) 00:07:40.559 Device Information : IOPS MiB/s Average min max 00:07:40.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004362.79 1000291.00 1011755.77 00:07:40.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005011.36 1000233.99 1042309.85 00:07:40.559 ======================================================== 00:07:40.559 Total : 256.00 0.12 1004687.07 1000233.99 1042309.85 00:07:40.559 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2501556 00:07:40.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2501556) - No such process 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2501556 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.818 rmmod nvme_tcp 00:07:40.818 rmmod nvme_fabrics 00:07:40.818 rmmod nvme_keyring 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2500726 ']' 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2500726 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2500726 ']' 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2500726 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500726 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500726' 00:07:40.818 killing process with pid 2500726 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2500726 00:07:40.818 03:18:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2500726 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.196 03:18:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.101 00:07:44.101 real 0m17.565s 00:07:44.101 user 0m32.687s 00:07:44.101 sys 0m5.373s 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.101 ************************************ 00:07:44.101 END TEST nvmf_delete_subsystem 00:07:44.101 ************************************ 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.101 ************************************ 00:07:44.101 START TEST nvmf_host_management 00:07:44.101 ************************************ 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:44.101 * Looking for test storage... 
00:07:44.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.101 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.361 --rc genhtml_branch_coverage=1 00:07:44.361 --rc genhtml_function_coverage=1 00:07:44.361 --rc genhtml_legend=1 00:07:44.361 --rc geninfo_all_blocks=1 00:07:44.361 --rc geninfo_unexecuted_blocks=1 00:07:44.361 00:07:44.361 ' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.361 --rc genhtml_branch_coverage=1 00:07:44.361 --rc genhtml_function_coverage=1 00:07:44.361 --rc genhtml_legend=1 00:07:44.361 --rc geninfo_all_blocks=1 00:07:44.361 --rc geninfo_unexecuted_blocks=1 00:07:44.361 00:07:44.361 ' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.361 --rc genhtml_branch_coverage=1 00:07:44.361 --rc genhtml_function_coverage=1 00:07:44.361 --rc genhtml_legend=1 00:07:44.361 --rc geninfo_all_blocks=1 00:07:44.361 --rc geninfo_unexecuted_blocks=1 00:07:44.361 00:07:44.361 ' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.361 --rc genhtml_branch_coverage=1 00:07:44.361 --rc genhtml_function_coverage=1 00:07:44.361 --rc genhtml_legend=1 00:07:44.361 --rc geninfo_all_blocks=1 00:07:44.361 --rc geninfo_unexecuted_blocks=1 00:07:44.361 00:07:44.361 ' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.361 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:44.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.362 03:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:49.639 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:49.639 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:49.639 Found net devices under 0000:af:00.0: cvl_0_0 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.639 03:18:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:49.639 Found net devices under 0000:af:00.1: cvl_0_1 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.639 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:07:49.640 00:07:49.640 --- 10.0.0.2 ping statistics --- 00:07:49.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.640 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:49.640 00:07:49.640 --- 10.0.0.1 ping statistics --- 00:07:49.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.640 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2505793 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2505793 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:49.640 03:18:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2505793 ']' 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.640 03:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.640 [2024-12-13 03:18:50.820377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:49.640 [2024-12-13 03:18:50.820462] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.899 [2024-12-13 03:18:50.938663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.899 [2024-12-13 03:18:51.044600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.899 [2024-12-13 03:18:51.044645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.899 [2024-12-13 03:18:51.044655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.899 [2024-12-13 03:18:51.044665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.899 [2024-12-13 03:18:51.044673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
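Note: the nvmftestinit trace above (nvmf/common.sh@250-291) amounts to a small amount of network plumbing before the target is started. A condensed, standalone sketch of those steps follows; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x/24 addresses, the namespace name, and port 4420 are taken from this run's trace, and the iptables comment added by the ipts wrapper is dropped for brevity, so treat it as an illustrative reconstruction rather than the test's canonical helper.

#!/usr/bin/env bash
# Reconstruction of the traced NVMe/TCP test-network bring-up (run as root).
set -euo pipefail

NS=cvl_0_0_ns_spdk      # namespace that owns the target-side port
TGT_IF=cvl_0_0          # target-side E810 port (moves into the namespace)
INI_IF=cvl_0_1          # initiator-side E810 port (stays in the root namespace)
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP connections reach the listener port on the initiator side.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks in both directions, mirroring the pings logged above.
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), which is the nvmfpid=2505793 process the trace then waits for.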
00:07:49.899 [2024-12-13 03:18:51.046932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.899 [2024-12-13 03:18:51.047022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.899 [2024-12-13 03:18:51.047131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.899 [2024-12-13 03:18:51.047153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.467 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.467 [2024-12-13 03:18:51.670354] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.727 Malloc0 00:07:50.727 [2024-12-13 03:18:51.807209] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2505958 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2505958 /var/tmp/bdevperf.sock 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2505958 ']' 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.727 { 00:07:50.727 "params": { 00:07:50.727 "name": "Nvme$subsystem", 00:07:50.727 "trtype": "$TEST_TRANSPORT", 00:07:50.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.727 "adrfam": "ipv4", 00:07:50.727 "trsvcid": "$NVMF_PORT", 00:07:50.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.727 "hdgst": ${hdgst:-false}, 00:07:50.727 "ddgst": ${ddgst:-false} 00:07:50.727 }, 00:07:50.727 "method": "bdev_nvme_attach_controller" 00:07:50.727 } 00:07:50.727 EOF 00:07:50.727 )") 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:50.727 03:18:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.727 "params": { 00:07:50.727 "name": "Nvme0", 00:07:50.727 "trtype": "tcp", 00:07:50.727 "traddr": "10.0.0.2", 00:07:50.727 "adrfam": "ipv4", 00:07:50.727 "trsvcid": "4420", 00:07:50.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.727 "hdgst": false, 00:07:50.727 "ddgst": false 00:07:50.727 }, 00:07:50.727 "method": "bdev_nvme_attach_controller" 00:07:50.727 }' 00:07:50.727 [2024-12-13 03:18:51.933157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
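Note: the gen_nvmf_target_json heredoc traced above builds the bdev_nvme_attach_controller entry shown in the printf output, and bdevperf receives the assembled document over /dev/fd/63 via process substitution. A roughly equivalent manual launch is sketched below; the attach-controller parameters are copied from the printed JSON, while the outer "subsystems"/"bdev" wrapper is an assumption about how gen_nvmf_target_json packages them (the wrapper itself is not echoed in this excerpt), and the temp-file path is arbitrary.

cat > /tmp/bdevperf_nvme0.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Same workload shape as the traced run: 64 outstanding 64 KiB verify I/Os for 10 s,
# with an RPC socket so the test can poll iostat while the job runs.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json /tmp/bdevperf_nvme0.json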
00:07:50.727 [2024-12-13 03:18:51.933250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505958 ] 00:07:50.987 [2024-12-13 03:18:52.052967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.987 [2024-12-13 03:18:52.167543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.924 Running I/O for 10 seconds... 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.924 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.925 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.925 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.925 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:51.925 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:51.925 03:18:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:51.925 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:51.925 
03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.925 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.925 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.185 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.185 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.185 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.185 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=601 00:07:52.185 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 601 -ge 100 ']' 00:07:52.185 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:52.186 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:52.186 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:52.186 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.186 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.186 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.186 [2024-12-13 03:18:53.181494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.186 [2024-12-13 03:18:53.181547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.181562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.186 [2024-12-13 03:18:53.181572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.181583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.186 [2024-12-13 03:18:53.181593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.181604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.186 [2024-12-13 03:18:53.181614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.181633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:07:52.186 [2024-12-13 03:18:53.182091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 
03:18:53.182122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.186 [2024-12-13 03:18:53.182800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.186 [2024-12-13 03:18:53.182809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.182981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.182992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.183449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.187 [2024-12-13 03:18:53.183465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.187 [2024-12-13 03:18:53.184768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:52.187 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.187 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.187 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.187 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.187 task offset: 85632 on job bdev=Nvme0n1 fails 00:07:52.187 00:07:52.187 Latency(us) 00:07:52.187 [2024-12-13T02:18:53.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.187 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.187 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:52.187 Verification LBA range: start 0x0 length 0x400 00:07:52.187 Nvme0n1 : 0.41 1635.30 102.21 156.44 0.00 34740.85 2028.50 31082.79 00:07:52.187 [2024-12-13T02:18:53.396Z] =================================================================================================================== 00:07:52.187 [2024-12-13T02:18:53.396Z] Total : 1635.30 102.21 156.44 0.00 34740.85 2028.50 31082.79 00:07:52.187 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.187 03:18:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:52.187 [2024-12-13 03:18:53.201020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.187 [2024-12-13 03:18:53.201064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:07:52.187 [2024-12-13 03:18:53.304159] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
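Note: the waitforio gate traced above polls bdevperf's own RPC socket until the Nvme0n1 bdev has completed at least 100 reads (the first poll saw 67, the second 601), and only then does host_management.sh@84/@85 remove and re-add host0 on cnode0, which is what triggers the flood of ABORTED - SQ DELETION completions and the controller reset recorded here. A sketch of that sequence, assuming SPDK's scripts/rpc.py in place of the test's rpc_cmd wrapper:

wait_for_io() {
    # Poll up to 10 times, 0.25 s apart, mirroring the traced loop.
    local sock=/var/tmp/bdevperf.sock bdev=Nvme0n1
    local i reads
    for ((i = 10; i != 0; i--)); do
        reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            return 0    # enough traffic in flight; safe to inject the fault
        fi
        sleep 0.25
    done
    return 1
}

wait_for_io || exit 1
# Management-plane fault injection: drop the host from the subsystem, then restore it.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The expected outcome is what the log shows: in-flight I/O on qid 1 is aborted with SQ DELETION, bdevperf loses its connection (Bad file descriptor on the old qpair), and the subsequent reset reconnects successfully once the host entry is back.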
00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2505958 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.125 { 00:07:53.125 "params": { 00:07:53.125 "name": "Nvme$subsystem", 00:07:53.125 "trtype": "$TEST_TRANSPORT", 00:07:53.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.125 "adrfam": "ipv4", 00:07:53.125 "trsvcid": "$NVMF_PORT", 00:07:53.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.125 "hdgst": ${hdgst:-false}, 00:07:53.125 "ddgst": ${ddgst:-false} 00:07:53.125 }, 00:07:53.125 "method": "bdev_nvme_attach_controller" 00:07:53.125 } 00:07:53.125 EOF 00:07:53.125 )") 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:53.125 03:18:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.125 "params": { 00:07:53.125 "name": "Nvme0", 00:07:53.125 "trtype": "tcp", 00:07:53.125 "traddr": "10.0.0.2", 00:07:53.125 "adrfam": "ipv4", 00:07:53.125 "trsvcid": "4420", 00:07:53.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.125 "hdgst": false, 00:07:53.125 "ddgst": false 00:07:53.125 }, 00:07:53.125 "method": "bdev_nvme_attach_controller" 00:07:53.125 }' 00:07:53.125 [2024-12-13 03:18:54.279139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:53.125 [2024-12-13 03:18:54.279226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506313 ] 00:07:53.384 [2024-12-13 03:18:54.391394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.384 [2024-12-13 03:18:54.507023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.952 Running I/O for 1 seconds... 
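Pulled out of the interleaved xtrace above for readability: with subsystem index 0, the gen_nvmf_target_json heredoc resolves to the single bdev_nvme_attach_controller fragment below, which is what bdevperf is fed over /dev/fd/62 for this run. This only restates the printf output already shown; the jq step above then assembles it into the complete config that bdevperf consumes.

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }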
00:07:55.330 1751.00 IOPS, 109.44 MiB/s 00:07:55.330 Latency(us) 00:07:55.330 [2024-12-13T02:18:56.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.330 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:55.330 Verification LBA range: start 0x0 length 0x400 00:07:55.330 Nvme0n1 : 1.04 1791.72 111.98 0.00 0.00 35147.29 6241.52 30208.98 00:07:55.330 [2024-12-13T02:18:56.539Z] =================================================================================================================== 00:07:55.330 [2024-12-13T02:18:56.539Z] Total : 1791.72 111.98 0.00 0.00 35147.29 6241.52 30208.98 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.898 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:55.898 rmmod nvme_tcp 00:07:56.157 rmmod nvme_fabrics 00:07:56.157 rmmod nvme_keyring 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2505793 ']' 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2505793 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2505793 ']' 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2505793 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505793 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.157 03:18:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505793' 00:07:56.157 killing process with pid 2505793 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2505793 00:07:56.157 03:18:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2505793 00:07:57.539 [2024-12-13 03:18:58.461622] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.539 03:18:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:59.502 00:07:59.502 real 0m15.438s 00:07:59.502 user 0m35.033s 00:07:59.502 sys 0m5.476s 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.502 ************************************ 00:07:59.502 END TEST nvmf_host_management 00:07:59.502 ************************************ 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.502 ************************************ 00:07:59.502 START TEST nvmf_lvol 00:07:59.502 ************************************ 00:07:59.502 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.762 * Looking for test storage... 00:07:59.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.762 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.763 --rc genhtml_branch_coverage=1 00:07:59.763 --rc genhtml_function_coverage=1 00:07:59.763 --rc genhtml_legend=1 00:07:59.763 --rc geninfo_all_blocks=1 00:07:59.763 --rc geninfo_unexecuted_blocks=1 00:07:59.763 00:07:59.763 ' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.763 --rc genhtml_branch_coverage=1 00:07:59.763 --rc genhtml_function_coverage=1 00:07:59.763 --rc genhtml_legend=1 00:07:59.763 --rc geninfo_all_blocks=1 00:07:59.763 --rc geninfo_unexecuted_blocks=1 00:07:59.763 00:07:59.763 ' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.763 --rc genhtml_branch_coverage=1 00:07:59.763 --rc genhtml_function_coverage=1 00:07:59.763 --rc genhtml_legend=1 00:07:59.763 --rc geninfo_all_blocks=1 00:07:59.763 --rc geninfo_unexecuted_blocks=1 00:07:59.763 00:07:59.763 ' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.763 --rc genhtml_branch_coverage=1 00:07:59.763 --rc genhtml_function_coverage=1 00:07:59.763 --rc genhtml_legend=1 00:07:59.763 --rc geninfo_all_blocks=1 00:07:59.763 --rc geninfo_unexecuted_blocks=1 00:07:59.763 00:07:59.763 ' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.763 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.764 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.764 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.764 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.764 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.764 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.764 03:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:05.038 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:05.038 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:05.038 03:19:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:05.038 Found net devices under 0000:af:00.0: cvl_0_0 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:05.038 Found net devices under 0000:af:00.1: cvl_0_1 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.038 03:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:05.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:08:05.038 00:08:05.038 --- 10.0.0.2 ping statistics --- 00:08:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.038 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:05.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:05.038 00:08:05.038 --- 10.0.0.1 ping statistics --- 00:08:05.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.038 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:05.038 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2510463 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2510463 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2510463 ']' 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.039 03:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.297 [2024-12-13 03:19:06.319346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
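Condensed from the nvmf_tcp_init trace above: the test-side network plumbing for this run boils down to the commands below. The interface names (cvl_0_0 / cvl_0_1, the two e810 ports found earlier) and the 10.0.0.x addresses are simply what this host used; this is a readability sketch of the steps already logged, not an additional setup procedure.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # nvmf target runs inside this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator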
00:08:05.297 [2024-12-13 03:19:06.319437] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.297 [2024-12-13 03:19:06.455293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.556 [2024-12-13 03:19:06.565195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.556 [2024-12-13 03:19:06.565240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.556 [2024-12-13 03:19:06.565251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.556 [2024-12-13 03:19:06.565262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.556 [2024-12-13 03:19:06.565271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.556 [2024-12-13 03:19:06.567569] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.556 [2024-12-13 03:19:06.567577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.556 [2024-12-13 03:19:06.567581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.124 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:06.125 [2024-12-13 03:19:07.314770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.383 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:06.642 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:06.642 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:06.901 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:06.901 03:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:06.901 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:07.159 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=13517afa-d0be-4fb6-8457-28b5435e612d 00:08:07.159 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13517afa-d0be-4fb6-8457-28b5435e612d lvol 20 00:08:07.418 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=756146c1-5066-400f-957b-aafd4d8096cf 00:08:07.418 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:07.677 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 756146c1-5066-400f-957b-aafd4d8096cf 00:08:07.678 03:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.936 [2024-12-13 03:19:09.019476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.936 03:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.195 03:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:08.195 03:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2510958 00:08:08.195 03:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:09.132 03:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 756146c1-5066-400f-957b-aafd4d8096cf MY_SNAPSHOT 00:08:09.390 03:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c1c30492-1993-4ed6-b427-c1b70af2bb40 00:08:09.390 03:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 756146c1-5066-400f-957b-aafd4d8096cf 30 00:08:09.649 03:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c1c30492-1993-4ed6-b427-c1b70af2bb40 MY_CLONE 00:08:09.907 03:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=29f2cdf1-63fe-460e-816d-f3c1f9854ec3 00:08:09.907 03:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 29f2cdf1-63fe-460e-816d-f3c1f9854ec3 00:08:10.475 03:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2510958 00:08:18.593 Initializing NVMe Controllers 00:08:18.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:18.593 Controller IO queue size 128, less than required. 00:08:18.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
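Stripped of the xtrace prefixes, the nvmf_lvol test body above reduces to the RPC sequence below. The UUIDs are the ones returned in this run, and rpc.py / spdk_nvme_perf are written relative to the SPDK tree rather than the full Jenkins workspace path; this is a condensed restatement for readability, not the literal test script, which also sleeps and polls between steps.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                                   # Malloc0
    scripts/rpc.py bdev_malloc_create 64 512                                   # Malloc1
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs                          # 13517afa-d0be-4fb6-8457-28b5435e612d
    scripts/rpc.py bdev_lvol_create -u 13517afa-d0be-4fb6-8457-28b5435e612d lvol 20    # 756146c1-5066-400f-957b-aafd4d8096cf
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 756146c1-5066-400f-957b-aafd4d8096cf
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &                     # perf_pid=2510958
    scripts/rpc.py bdev_lvol_snapshot 756146c1-5066-400f-957b-aafd4d8096cf MY_SNAPSHOT    # c1c30492-1993-4ed6-b427-c1b70af2bb40
    scripts/rpc.py bdev_lvol_resize 756146c1-5066-400f-957b-aafd4d8096cf 30
    scripts/rpc.py bdev_lvol_clone c1c30492-1993-4ed6-b427-c1b70af2bb40 MY_CLONE          # 29f2cdf1-63fe-460e-816d-f3c1f9854ec3
    scripts/rpc.py bdev_lvol_inflate 29f2cdf1-63fe-460e-816d-f3c1f9854ec3
    wait                                                                       # for spdk_nvme_perf; its summary follows below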
00:08:18.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:18.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:18.593 Initialization complete. Launching workers. 00:08:18.593 ======================================================== 00:08:18.593 Latency(us) 00:08:18.593 Device Information : IOPS MiB/s Average min max 00:08:18.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11078.50 43.28 11556.22 614.17 171031.16 00:08:18.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10787.20 42.14 11867.77 4712.78 146062.36 00:08:18.593 ======================================================== 00:08:18.593 Total : 21865.70 85.41 11709.92 614.17 171031.16 00:08:18.593 00:08:18.852 03:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:18.852 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 756146c1-5066-400f-957b-aafd4d8096cf 00:08:19.110 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13517afa-d0be-4fb6-8457-28b5435e612d 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.369 rmmod nvme_tcp 00:08:19.369 rmmod nvme_fabrics 00:08:19.369 rmmod nvme_keyring 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2510463 ']' 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2510463 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2510463 ']' 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2510463 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2510463 00:08:19.369 03:19:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2510463' 00:08:19.369 killing process with pid 2510463 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2510463 00:08:19.369 03:19:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2510463 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.273 03:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.180 00:08:23.180 real 0m23.440s 00:08:23.180 user 1m8.717s 00:08:23.180 sys 0m7.078s 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.180 ************************************ 00:08:23.180 END TEST nvmf_lvol 00:08:23.180 ************************************ 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.180 ************************************ 00:08:23.180 START TEST nvmf_lvs_grow 00:08:23.180 ************************************ 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.180 * Looking for test storage... 
00:08:23.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.180 --rc genhtml_branch_coverage=1 00:08:23.180 --rc genhtml_function_coverage=1 00:08:23.180 --rc genhtml_legend=1 00:08:23.180 --rc geninfo_all_blocks=1 00:08:23.180 --rc geninfo_unexecuted_blocks=1 00:08:23.180 00:08:23.180 ' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.180 --rc genhtml_branch_coverage=1 00:08:23.180 --rc genhtml_function_coverage=1 00:08:23.180 --rc genhtml_legend=1 00:08:23.180 --rc geninfo_all_blocks=1 00:08:23.180 --rc geninfo_unexecuted_blocks=1 00:08:23.180 00:08:23.180 ' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.180 --rc genhtml_branch_coverage=1 00:08:23.180 --rc genhtml_function_coverage=1 00:08:23.180 --rc genhtml_legend=1 00:08:23.180 --rc geninfo_all_blocks=1 00:08:23.180 --rc geninfo_unexecuted_blocks=1 00:08:23.180 00:08:23.180 ' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.180 --rc genhtml_branch_coverage=1 00:08:23.180 --rc genhtml_function_coverage=1 00:08:23.180 --rc genhtml_legend=1 00:08:23.180 --rc geninfo_all_blocks=1 00:08:23.180 --rc geninfo_unexecuted_blocks=1 00:08:23.180 00:08:23.180 ' 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:23.180 03:19:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.180 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.181 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.441 03:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:28.724 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:28.724 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.724 03:19:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.724 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:28.725 Found net devices under 0000:af:00.0: cvl_0_0 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:28.725 Found net devices under 0000:af:00.1: cvl_0_1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:08:28.725 00:08:28.725 --- 10.0.0.2 ping statistics --- 00:08:28.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.725 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:28.725 00:08:28.725 --- 10.0.0.1 ping statistics --- 00:08:28.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.725 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2516459 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2516459 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2516459 ']' 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.725 03:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:28.725 [2024-12-13 03:19:29.504656] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:28.725 [2024-12-13 03:19:29.504745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.725 [2024-12-13 03:19:29.620969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.725 [2024-12-13 03:19:29.724131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.725 [2024-12-13 03:19:29.724177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.725 [2024-12-13 03:19:29.724187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.725 [2024-12-13 03:19:29.724197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.725 [2024-12-13 03:19:29.724204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.725 [2024-12-13 03:19:29.725461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.293 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:29.553 [2024-12-13 03:19:30.502841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 ************************************ 00:08:29.553 START TEST lvs_grow_clean 00:08:29.553 ************************************ 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.553 03:19:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.553 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.813 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.813 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:29.813 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cbe084ed-19a1-433f-923a-29899fd75544 00:08:29.813 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:29.813 03:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.072 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.072 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.072 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cbe084ed-19a1-433f-923a-29899fd75544 lvol 150 00:08:30.331 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=af470411-d26e-42c1-9f90-c3de196afba8 00:08:30.331 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.331 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.331 [2024-12-13 03:19:31.511534] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.331 [2024-12-13 03:19:31.511623] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.331 true 00:08:30.331 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cbe084ed-19a1-433f-923a-29899fd75544 00:08:30.331 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.590 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:30.590 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.849 03:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 af470411-d26e-42c1-9f90-c3de196afba8 00:08:31.108 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.108 [2024-12-13 03:19:32.253849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.108 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2517134 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2517134 /var/tmp/bdevperf.sock 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2517134 ']' 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.367 03:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:31.367 [2024-12-13 03:19:32.494795] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:31.367 [2024-12-13 03:19:32.494898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517134 ] 00:08:31.626 [2024-12-13 03:19:32.607944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.626 [2024-12-13 03:19:32.715615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.194 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.194 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:32.194 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.453 Nvme0n1 00:08:32.712 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.712 [ 00:08:32.712 { 00:08:32.712 "name": "Nvme0n1", 00:08:32.712 "aliases": [ 00:08:32.712 "af470411-d26e-42c1-9f90-c3de196afba8" 00:08:32.712 ], 00:08:32.712 "product_name": "NVMe disk", 00:08:32.712 "block_size": 4096, 00:08:32.712 "num_blocks": 38912, 00:08:32.712 "uuid": "af470411-d26e-42c1-9f90-c3de196afba8", 00:08:32.712 "numa_id": 1, 00:08:32.712 "assigned_rate_limits": { 00:08:32.712 "rw_ios_per_sec": 0, 00:08:32.712 "rw_mbytes_per_sec": 0, 00:08:32.712 "r_mbytes_per_sec": 0, 00:08:32.712 "w_mbytes_per_sec": 0 00:08:32.712 }, 00:08:32.712 "claimed": false, 00:08:32.712 "zoned": false, 00:08:32.712 "supported_io_types": { 00:08:32.712 "read": true, 00:08:32.712 "write": true, 00:08:32.712 "unmap": true, 00:08:32.712 "flush": true, 00:08:32.712 "reset": true, 00:08:32.713 "nvme_admin": true, 00:08:32.713 "nvme_io": true, 00:08:32.713 "nvme_io_md": false, 00:08:32.713 "write_zeroes": true, 00:08:32.713 "zcopy": false, 00:08:32.713 "get_zone_info": false, 00:08:32.713 "zone_management": false, 00:08:32.713 "zone_append": false, 00:08:32.713 "compare": true, 00:08:32.713 "compare_and_write": true, 00:08:32.713 "abort": true, 00:08:32.713 "seek_hole": false, 00:08:32.713 "seek_data": false, 00:08:32.713 "copy": true, 00:08:32.713 "nvme_iov_md": false 00:08:32.713 }, 00:08:32.713 "memory_domains": [ 00:08:32.713 { 00:08:32.713 "dma_device_id": "system", 00:08:32.713 "dma_device_type": 1 00:08:32.713 } 00:08:32.713 ], 00:08:32.713 "driver_specific": { 00:08:32.713 "nvme": [ 00:08:32.713 { 00:08:32.713 "trid": { 00:08:32.713 "trtype": "TCP", 00:08:32.713 "adrfam": "IPv4", 00:08:32.713 "traddr": "10.0.0.2", 00:08:32.713 "trsvcid": "4420", 00:08:32.713 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.713 }, 00:08:32.713 "ctrlr_data": { 00:08:32.713 "cntlid": 1, 00:08:32.713 "vendor_id": "0x8086", 00:08:32.713 "model_number": "SPDK bdev Controller", 00:08:32.713 "serial_number": "SPDK0", 00:08:32.713 "firmware_revision": "25.01", 00:08:32.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.713 "oacs": { 00:08:32.713 "security": 0, 00:08:32.713 "format": 0, 00:08:32.713 "firmware": 0, 00:08:32.713 "ns_manage": 0 00:08:32.713 }, 00:08:32.713 "multi_ctrlr": true, 00:08:32.713 
"ana_reporting": false 00:08:32.713 }, 00:08:32.713 "vs": { 00:08:32.713 "nvme_version": "1.3" 00:08:32.713 }, 00:08:32.713 "ns_data": { 00:08:32.713 "id": 1, 00:08:32.713 "can_share": true 00:08:32.713 } 00:08:32.713 } 00:08:32.713 ], 00:08:32.713 "mp_policy": "active_passive" 00:08:32.713 } 00:08:32.713 } 00:08:32.713 ] 00:08:32.713 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2517369 00:08:32.713 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.713 03:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.972 Running I/O for 10 seconds... 00:08:33.909 Latency(us) 00:08:33.909 [2024-12-13T02:19:35.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.909 Nvme0n1 : 1.00 20336.00 79.44 0.00 0.00 0.00 0.00 0.00 00:08:33.909 [2024-12-13T02:19:35.118Z] =================================================================================================================== 00:08:33.909 [2024-12-13T02:19:35.118Z] Total : 20336.00 79.44 0.00 0.00 0.00 0.00 0.00 00:08:33.909 00:08:34.845 03:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:34.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.845 Nvme0n1 : 2.00 20520.00 80.16 0.00 0.00 0.00 0.00 0.00 00:08:34.845 [2024-12-13T02:19:36.054Z] =================================================================================================================== 00:08:34.845 [2024-12-13T02:19:36.055Z] Total : 20520.00 80.16 0.00 0.00 0.00 0.00 0.00 00:08:34.846 00:08:35.104 true 00:08:35.104 03:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:35.104 03:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.104 03:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.104 03:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.104 03:19:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2517369 00:08:36.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.042 Nvme0n1 : 3.00 20559.67 80.31 0.00 0.00 0.00 0.00 0.00 00:08:36.042 [2024-12-13T02:19:37.251Z] =================================================================================================================== 00:08:36.042 [2024-12-13T02:19:37.251Z] Total : 20559.67 80.31 0.00 0.00 0.00 0.00 0.00 00:08:36.042 00:08:36.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.977 Nvme0n1 : 4.00 20632.00 80.59 0.00 0.00 0.00 0.00 0.00 00:08:36.977 [2024-12-13T02:19:38.186Z] 
=================================================================================================================== 00:08:36.977 [2024-12-13T02:19:38.186Z] Total : 20632.00 80.59 0.00 0.00 0.00 0.00 0.00 00:08:36.977 00:08:37.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.913 Nvme0n1 : 5.00 20667.80 80.73 0.00 0.00 0.00 0.00 0.00 00:08:37.913 [2024-12-13T02:19:39.122Z] =================================================================================================================== 00:08:37.913 [2024-12-13T02:19:39.122Z] Total : 20667.80 80.73 0.00 0.00 0.00 0.00 0.00 00:08:37.913 00:08:38.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.849 Nvme0n1 : 6.00 20706.67 80.89 0.00 0.00 0.00 0.00 0.00 00:08:38.849 [2024-12-13T02:19:40.058Z] =================================================================================================================== 00:08:38.849 [2024-12-13T02:19:40.058Z] Total : 20706.67 80.89 0.00 0.00 0.00 0.00 0.00 00:08:38.849 00:08:39.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.786 Nvme0n1 : 7.00 20652.29 80.67 0.00 0.00 0.00 0.00 0.00 00:08:39.786 [2024-12-13T02:19:40.995Z] =================================================================================================================== 00:08:39.786 [2024-12-13T02:19:40.995Z] Total : 20652.29 80.67 0.00 0.00 0.00 0.00 0.00 00:08:39.786 00:08:41.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.164 Nvme0n1 : 8.00 20684.75 80.80 0.00 0.00 0.00 0.00 0.00 00:08:41.164 [2024-12-13T02:19:42.373Z] =================================================================================================================== 00:08:41.164 [2024-12-13T02:19:42.373Z] Total : 20684.75 80.80 0.00 0.00 0.00 0.00 0.00 00:08:41.164 00:08:42.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.100 Nvme0n1 : 9.00 20715.11 80.92 0.00 0.00 0.00 0.00 0.00 00:08:42.100 [2024-12-13T02:19:43.309Z] =================================================================================================================== 00:08:42.100 [2024-12-13T02:19:43.309Z] Total : 20715.11 80.92 0.00 0.00 0.00 0.00 0.00 00:08:42.100 00:08:43.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.068 Nvme0n1 : 10.00 20740.00 81.02 0.00 0.00 0.00 0.00 0.00 00:08:43.068 [2024-12-13T02:19:44.277Z] =================================================================================================================== 00:08:43.068 [2024-12-13T02:19:44.277Z] Total : 20740.00 81.02 0.00 0.00 0.00 0.00 0.00 00:08:43.068 00:08:43.068 00:08:43.068 Latency(us) 00:08:43.068 [2024-12-13T02:19:44.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.068 Nvme0n1 : 10.00 20742.63 81.03 0.00 0.00 6167.79 3791.73 12420.63 00:08:43.068 [2024-12-13T02:19:44.277Z] =================================================================================================================== 00:08:43.068 [2024-12-13T02:19:44.277Z] Total : 20742.63 81.03 0.00 0.00 6167.79 3791.73 12420.63 00:08:43.068 { 00:08:43.068 "results": [ 00:08:43.068 { 00:08:43.068 "job": "Nvme0n1", 00:08:43.068 "core_mask": "0x2", 00:08:43.068 "workload": "randwrite", 00:08:43.068 "status": "finished", 00:08:43.068 "queue_depth": 128, 00:08:43.068 "io_size": 4096, 00:08:43.068 
"runtime": 10.004904, 00:08:43.068 "iops": 20742.627815319367, 00:08:43.068 "mibps": 81.02588990359128, 00:08:43.068 "io_failed": 0, 00:08:43.068 "io_timeout": 0, 00:08:43.068 "avg_latency_us": 6167.789628993264, 00:08:43.068 "min_latency_us": 3791.7257142857143, 00:08:43.068 "max_latency_us": 12420.63238095238 00:08:43.068 } 00:08:43.068 ], 00:08:43.068 "core_count": 1 00:08:43.068 } 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2517134 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2517134 ']' 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2517134 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2517134 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2517134' 00:08:43.068 killing process with pid 2517134 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2517134 00:08:43.068 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.068 00:08:43.068 Latency(us) 00:08:43.068 [2024-12-13T02:19:44.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.068 [2024-12-13T02:19:44.277Z] =================================================================================================================== 00:08:43.068 [2024-12-13T02:19:44.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.068 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2517134 00:08:44.006 03:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.006 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.264 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:44.264 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:44.523 03:19:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:44.523 [2024-12-13 03:19:45.665731] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.523 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.524 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.524 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.524 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:44.524 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:44.782 request: 00:08:44.782 { 00:08:44.782 "uuid": "cbe084ed-19a1-433f-923a-29899fd75544", 00:08:44.782 "method": "bdev_lvol_get_lvstores", 00:08:44.782 "req_id": 1 00:08:44.782 } 00:08:44.782 Got JSON-RPC error response 00:08:44.782 response: 00:08:44.782 { 00:08:44.782 "code": -19, 00:08:44.782 "message": "No such device" 00:08:44.782 } 00:08:44.782 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:44.782 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.782 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.782 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.782 03:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.041 aio_bdev 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev af470411-d26e-42c1-9f90-c3de196afba8 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=af470411-d26e-42c1-9f90-c3de196afba8 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.041 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.300 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b af470411-d26e-42c1-9f90-c3de196afba8 -t 2000 00:08:45.300 [ 00:08:45.300 { 00:08:45.300 "name": "af470411-d26e-42c1-9f90-c3de196afba8", 00:08:45.300 "aliases": [ 00:08:45.300 "lvs/lvol" 00:08:45.300 ], 00:08:45.300 "product_name": "Logical Volume", 00:08:45.300 "block_size": 4096, 00:08:45.300 "num_blocks": 38912, 00:08:45.300 "uuid": "af470411-d26e-42c1-9f90-c3de196afba8", 00:08:45.300 "assigned_rate_limits": { 00:08:45.300 "rw_ios_per_sec": 0, 00:08:45.300 "rw_mbytes_per_sec": 0, 00:08:45.300 "r_mbytes_per_sec": 0, 00:08:45.300 "w_mbytes_per_sec": 0 00:08:45.300 }, 00:08:45.300 "claimed": false, 00:08:45.300 "zoned": false, 00:08:45.300 "supported_io_types": { 00:08:45.300 "read": true, 00:08:45.300 "write": true, 00:08:45.300 "unmap": true, 00:08:45.300 "flush": false, 00:08:45.300 "reset": true, 00:08:45.300 "nvme_admin": false, 00:08:45.300 "nvme_io": false, 00:08:45.300 "nvme_io_md": false, 00:08:45.300 "write_zeroes": true, 00:08:45.300 "zcopy": false, 00:08:45.300 "get_zone_info": false, 00:08:45.300 "zone_management": false, 00:08:45.300 "zone_append": false, 00:08:45.300 "compare": false, 00:08:45.300 "compare_and_write": false, 00:08:45.300 "abort": false, 00:08:45.300 "seek_hole": true, 00:08:45.300 "seek_data": true, 00:08:45.300 "copy": false, 00:08:45.300 "nvme_iov_md": false 00:08:45.300 }, 00:08:45.300 "driver_specific": { 00:08:45.300 "lvol": { 00:08:45.300 "lvol_store_uuid": "cbe084ed-19a1-433f-923a-29899fd75544", 00:08:45.300 "base_bdev": "aio_bdev", 00:08:45.300 "thin_provision": false, 00:08:45.300 "num_allocated_clusters": 38, 00:08:45.300 "snapshot": false, 00:08:45.300 "clone": false, 00:08:45.300 "esnap_clone": false 00:08:45.300 } 00:08:45.300 } 00:08:45.300 } 00:08:45.300 ] 00:08:45.300 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:45.300 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:45.300 
03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:45.560 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:45.560 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:45.560 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:45.820 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:45.820 03:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete af470411-d26e-42c1-9f90-c3de196afba8 00:08:46.079 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cbe084ed-19a1-433f-923a-29899fd75544 00:08:46.079 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.338 00:08:46.338 real 0m16.910s 00:08:46.338 user 0m16.703s 00:08:46.338 sys 0m1.502s 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:46.338 ************************************ 00:08:46.338 END TEST lvs_grow_clean 00:08:46.338 ************************************ 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.338 ************************************ 00:08:46.338 START TEST lvs_grow_dirty 00:08:46.338 ************************************ 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:46.338 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.597 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.597 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.597 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:46.597 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:46.857 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:08:46.857 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:08:46.857 03:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:47.116 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:47.116 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:47.116 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f lvol 150 00:08:47.116 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:08:47.116 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.116 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:47.376 [2024-12-13 03:19:48.499006] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:47.376 [2024-12-13 03:19:48.499084] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:47.376 true 00:08:47.376 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:08:47.376 03:19:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:47.637 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:47.637 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:47.896 03:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:08:47.896 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:48.155 [2024-12-13 03:19:49.253403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.155 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2519896 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2519896 /var/tmp/bdevperf.sock 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2519896 ']' 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.414 03:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.414 [2024-12-13 03:19:49.522492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:48.414 [2024-12-13 03:19:49.522577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2519896 ] 00:08:48.673 [2024-12-13 03:19:49.634991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.673 [2024-12-13 03:19:49.744622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.240 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.240 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:49.240 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:49.499 Nvme0n1 00:08:49.499 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:49.758 [ 00:08:49.758 { 00:08:49.758 "name": "Nvme0n1", 00:08:49.758 "aliases": [ 00:08:49.758 "cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2" 00:08:49.758 ], 00:08:49.758 "product_name": "NVMe disk", 00:08:49.758 "block_size": 4096, 00:08:49.758 "num_blocks": 38912, 00:08:49.758 "uuid": "cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2", 00:08:49.758 "numa_id": 1, 00:08:49.758 "assigned_rate_limits": { 00:08:49.758 "rw_ios_per_sec": 0, 00:08:49.758 "rw_mbytes_per_sec": 0, 00:08:49.758 "r_mbytes_per_sec": 0, 00:08:49.758 "w_mbytes_per_sec": 0 00:08:49.758 }, 00:08:49.758 "claimed": false, 00:08:49.758 "zoned": false, 00:08:49.758 "supported_io_types": { 00:08:49.758 "read": true, 00:08:49.758 "write": true, 00:08:49.758 "unmap": true, 00:08:49.758 "flush": true, 00:08:49.758 "reset": true, 00:08:49.758 "nvme_admin": true, 00:08:49.758 "nvme_io": true, 00:08:49.758 "nvme_io_md": false, 00:08:49.758 "write_zeroes": true, 00:08:49.758 "zcopy": false, 00:08:49.758 "get_zone_info": false, 00:08:49.758 "zone_management": false, 00:08:49.758 "zone_append": false, 00:08:49.758 "compare": true, 00:08:49.758 "compare_and_write": true, 00:08:49.758 "abort": true, 00:08:49.758 "seek_hole": false, 00:08:49.758 "seek_data": false, 00:08:49.758 "copy": true, 00:08:49.758 "nvme_iov_md": false 00:08:49.758 }, 00:08:49.758 "memory_domains": [ 00:08:49.758 { 00:08:49.758 "dma_device_id": "system", 00:08:49.758 "dma_device_type": 1 00:08:49.758 } 00:08:49.758 ], 00:08:49.758 "driver_specific": { 00:08:49.758 "nvme": [ 00:08:49.758 { 00:08:49.758 "trid": { 00:08:49.758 "trtype": "TCP", 00:08:49.758 "adrfam": "IPv4", 00:08:49.758 "traddr": "10.0.0.2", 00:08:49.758 "trsvcid": "4420", 00:08:49.758 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:49.758 }, 00:08:49.758 "ctrlr_data": { 00:08:49.758 "cntlid": 1, 00:08:49.758 "vendor_id": "0x8086", 00:08:49.758 "model_number": "SPDK bdev Controller", 00:08:49.758 "serial_number": "SPDK0", 00:08:49.758 "firmware_revision": "25.01", 00:08:49.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:49.758 "oacs": { 00:08:49.758 "security": 0, 00:08:49.758 "format": 0, 00:08:49.758 "firmware": 0, 00:08:49.758 "ns_manage": 0 00:08:49.758 }, 00:08:49.758 "multi_ctrlr": true, 00:08:49.758 
"ana_reporting": false 00:08:49.758 }, 00:08:49.758 "vs": { 00:08:49.758 "nvme_version": "1.3" 00:08:49.758 }, 00:08:49.758 "ns_data": { 00:08:49.758 "id": 1, 00:08:49.758 "can_share": true 00:08:49.758 } 00:08:49.758 } 00:08:49.758 ], 00:08:49.758 "mp_policy": "active_passive" 00:08:49.758 } 00:08:49.758 } 00:08:49.758 ] 00:08:49.758 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2520122 00:08:49.758 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:49.758 03:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:49.758 Running I/O for 10 seconds... 00:08:51.239 Latency(us) 00:08:51.239 [2024-12-13T02:19:52.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.239 Nvme0n1 : 1.00 20640.00 80.62 0.00 0.00 0.00 0.00 0.00 00:08:51.239 [2024-12-13T02:19:52.448Z] =================================================================================================================== 00:08:51.239 [2024-12-13T02:19:52.448Z] Total : 20640.00 80.62 0.00 0.00 0.00 0.00 0.00 00:08:51.239 00:08:51.832 03:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:08:51.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.832 Nvme0n1 : 2.00 20670.50 80.74 0.00 0.00 0.00 0.00 0.00 00:08:51.832 [2024-12-13T02:19:53.041Z] =================================================================================================================== 00:08:51.832 [2024-12-13T02:19:53.041Z] Total : 20670.50 80.74 0.00 0.00 0.00 0.00 0.00 00:08:51.832 00:08:52.091 true 00:08:52.091 03:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:08:52.091 03:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:52.349 03:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:52.349 03:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:52.349 03:19:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2520122 00:08:52.917 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.917 Nvme0n1 : 3.00 20729.67 80.98 0.00 0.00 0.00 0.00 0.00 00:08:52.917 [2024-12-13T02:19:54.126Z] =================================================================================================================== 00:08:52.917 [2024-12-13T02:19:54.126Z] Total : 20729.67 80.98 0.00 0.00 0.00 0.00 0.00 00:08:52.917 00:08:53.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.853 Nvme0n1 : 4.00 20739.50 81.01 0.00 0.00 0.00 0.00 0.00 00:08:53.853 [2024-12-13T02:19:55.062Z] 
=================================================================================================================== 00:08:53.853 [2024-12-13T02:19:55.062Z] Total : 20739.50 81.01 0.00 0.00 0.00 0.00 0.00 00:08:53.853 00:08:54.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.787 Nvme0n1 : 5.00 20786.80 81.20 0.00 0.00 0.00 0.00 0.00 00:08:54.787 [2024-12-13T02:19:55.996Z] =================================================================================================================== 00:08:54.787 [2024-12-13T02:19:55.996Z] Total : 20786.80 81.20 0.00 0.00 0.00 0.00 0.00 00:08:54.787 00:08:56.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.165 Nvme0n1 : 6.00 20828.50 81.36 0.00 0.00 0.00 0.00 0.00 00:08:56.165 [2024-12-13T02:19:57.374Z] =================================================================================================================== 00:08:56.165 [2024-12-13T02:19:57.374Z] Total : 20828.50 81.36 0.00 0.00 0.00 0.00 0.00 00:08:56.165 00:08:57.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.102 Nvme0n1 : 7.00 20840.29 81.41 0.00 0.00 0.00 0.00 0.00 00:08:57.102 [2024-12-13T02:19:58.311Z] =================================================================================================================== 00:08:57.102 [2024-12-13T02:19:58.311Z] Total : 20840.29 81.41 0.00 0.00 0.00 0.00 0.00 00:08:57.102 00:08:58.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.035 Nvme0n1 : 8.00 20870.88 81.53 0.00 0.00 0.00 0.00 0.00 00:08:58.035 [2024-12-13T02:19:59.244Z] =================================================================================================================== 00:08:58.035 [2024-12-13T02:19:59.244Z] Total : 20870.88 81.53 0.00 0.00 0.00 0.00 0.00 00:08:58.035 00:08:58.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.971 Nvme0n1 : 9.00 20892.00 81.61 0.00 0.00 0.00 0.00 0.00 00:08:58.971 [2024-12-13T02:20:00.180Z] =================================================================================================================== 00:08:58.971 [2024-12-13T02:20:00.180Z] Total : 20892.00 81.61 0.00 0.00 0.00 0.00 0.00 00:08:58.971 00:08:59.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.907 Nvme0n1 : 10.00 20874.10 81.54 0.00 0.00 0.00 0.00 0.00 00:08:59.907 [2024-12-13T02:20:01.116Z] =================================================================================================================== 00:08:59.907 [2024-12-13T02:20:01.116Z] Total : 20874.10 81.54 0.00 0.00 0.00 0.00 0.00 00:08:59.907 00:08:59.907 00:08:59.907 Latency(us) 00:08:59.907 [2024-12-13T02:20:01.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.908 Nvme0n1 : 10.00 20874.15 81.54 0.00 0.00 6128.44 1708.62 12545.46 00:08:59.908 [2024-12-13T02:20:01.117Z] =================================================================================================================== 00:08:59.908 [2024-12-13T02:20:01.117Z] Total : 20874.15 81.54 0.00 0.00 6128.44 1708.62 12545.46 00:08:59.908 { 00:08:59.908 "results": [ 00:08:59.908 { 00:08:59.908 "job": "Nvme0n1", 00:08:59.908 "core_mask": "0x2", 00:08:59.908 "workload": "randwrite", 00:08:59.908 "status": "finished", 00:08:59.908 "queue_depth": 128, 00:08:59.908 "io_size": 4096, 00:08:59.908 
"runtime": 10.00304, 00:08:59.908 "iops": 20874.154257105838, 00:08:59.908 "mibps": 81.53966506681968, 00:08:59.908 "io_failed": 0, 00:08:59.908 "io_timeout": 0, 00:08:59.908 "avg_latency_us": 6128.4432884908565, 00:08:59.908 "min_latency_us": 1708.6171428571429, 00:08:59.908 "max_latency_us": 12545.462857142857 00:08:59.908 } 00:08:59.908 ], 00:08:59.908 "core_count": 1 00:08:59.908 } 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2519896 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2519896 ']' 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2519896 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2519896 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2519896' 00:08:59.908 killing process with pid 2519896 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2519896 00:08:59.908 Received shutdown signal, test time was about 10.000000 seconds 00:08:59.908 00:08:59.908 Latency(us) 00:08:59.908 [2024-12-13T02:20:01.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.908 [2024-12-13T02:20:01.117Z] =================================================================================================================== 00:08:59.908 [2024-12-13T02:20:01.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:59.908 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2519896 00:09:00.845 03:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.104 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:01.366 03:20:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2516459 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2516459 00:09:01.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2516459 Killed "${NVMF_APP[@]}" "$@" 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2522150 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2522150 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2522150 ']' 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.366 03:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.628 [2024-12-13 03:20:02.649295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:01.628 [2024-12-13 03:20:02.649375] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.628 [2024-12-13 03:20:02.767188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.887 [2024-12-13 03:20:02.865039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.887 [2024-12-13 03:20:02.865082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.887 [2024-12-13 03:20:02.865092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.887 [2024-12-13 03:20:02.865102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:01.887 [2024-12-13 03:20:02.865112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.887 [2024-12-13 03:20:02.866339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.456 [2024-12-13 03:20:03.640613] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:02.456 [2024-12-13 03:20:03.640770] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:02.456 [2024-12-13 03:20:03.640806] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.456 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.715 03:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 -t 2000 00:09:02.974 [ 00:09:02.974 { 00:09:02.974 "name": "cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2", 00:09:02.974 "aliases": [ 00:09:02.974 "lvs/lvol" 00:09:02.974 ], 00:09:02.974 "product_name": "Logical Volume", 00:09:02.974 "block_size": 4096, 00:09:02.974 "num_blocks": 38912, 00:09:02.974 "uuid": "cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2", 00:09:02.974 "assigned_rate_limits": { 00:09:02.974 "rw_ios_per_sec": 0, 00:09:02.974 "rw_mbytes_per_sec": 0, 
00:09:02.974 "r_mbytes_per_sec": 0, 00:09:02.974 "w_mbytes_per_sec": 0 00:09:02.974 }, 00:09:02.974 "claimed": false, 00:09:02.974 "zoned": false, 00:09:02.974 "supported_io_types": { 00:09:02.974 "read": true, 00:09:02.974 "write": true, 00:09:02.974 "unmap": true, 00:09:02.974 "flush": false, 00:09:02.974 "reset": true, 00:09:02.974 "nvme_admin": false, 00:09:02.974 "nvme_io": false, 00:09:02.974 "nvme_io_md": false, 00:09:02.974 "write_zeroes": true, 00:09:02.974 "zcopy": false, 00:09:02.974 "get_zone_info": false, 00:09:02.974 "zone_management": false, 00:09:02.974 "zone_append": false, 00:09:02.974 "compare": false, 00:09:02.974 "compare_and_write": false, 00:09:02.974 "abort": false, 00:09:02.974 "seek_hole": true, 00:09:02.974 "seek_data": true, 00:09:02.974 "copy": false, 00:09:02.974 "nvme_iov_md": false 00:09:02.974 }, 00:09:02.974 "driver_specific": { 00:09:02.974 "lvol": { 00:09:02.974 "lvol_store_uuid": "3f3a795e-12f5-4e03-a298-22a28d5a3e9f", 00:09:02.974 "base_bdev": "aio_bdev", 00:09:02.974 "thin_provision": false, 00:09:02.974 "num_allocated_clusters": 38, 00:09:02.974 "snapshot": false, 00:09:02.974 "clone": false, 00:09:02.974 "esnap_clone": false 00:09:02.974 } 00:09:02.974 } 00:09:02.974 } 00:09:02.974 ] 00:09:02.974 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:02.974 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:02.974 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:03.234 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:03.234 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:03.234 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:03.234 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:03.234 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.493 [2024-12-13 03:20:04.553011] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:03.493 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:03.752 request: 00:09:03.752 { 00:09:03.752 "uuid": "3f3a795e-12f5-4e03-a298-22a28d5a3e9f", 00:09:03.752 "method": "bdev_lvol_get_lvstores", 00:09:03.752 "req_id": 1 00:09:03.752 } 00:09:03.752 Got JSON-RPC error response 00:09:03.752 response: 00:09:03.752 { 00:09:03.752 "code": -19, 00:09:03.752 "message": "No such device" 00:09:03.752 } 00:09:03.752 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:03.752 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.752 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.752 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.752 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.012 aio_bdev 00:09:04.012 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:09:04.012 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:09:04.012 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.012 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:04.012 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.012 03:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.012 03:20:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.012 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 -t 2000 00:09:04.270 [ 00:09:04.270 { 00:09:04.270 "name": "cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2", 00:09:04.270 "aliases": [ 00:09:04.270 "lvs/lvol" 00:09:04.270 ], 00:09:04.270 "product_name": "Logical Volume", 00:09:04.270 "block_size": 4096, 00:09:04.270 "num_blocks": 38912, 00:09:04.270 "uuid": "cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2", 00:09:04.270 "assigned_rate_limits": { 00:09:04.270 "rw_ios_per_sec": 0, 00:09:04.270 "rw_mbytes_per_sec": 0, 00:09:04.270 "r_mbytes_per_sec": 0, 00:09:04.270 "w_mbytes_per_sec": 0 00:09:04.270 }, 00:09:04.270 "claimed": false, 00:09:04.270 "zoned": false, 00:09:04.270 "supported_io_types": { 00:09:04.270 "read": true, 00:09:04.270 "write": true, 00:09:04.270 "unmap": true, 00:09:04.270 "flush": false, 00:09:04.270 "reset": true, 00:09:04.270 "nvme_admin": false, 00:09:04.270 "nvme_io": false, 00:09:04.270 "nvme_io_md": false, 00:09:04.270 "write_zeroes": true, 00:09:04.270 "zcopy": false, 00:09:04.270 "get_zone_info": false, 00:09:04.270 "zone_management": false, 00:09:04.270 "zone_append": false, 00:09:04.270 "compare": false, 00:09:04.270 "compare_and_write": false, 00:09:04.270 "abort": false, 00:09:04.270 "seek_hole": true, 00:09:04.270 "seek_data": true, 00:09:04.270 "copy": false, 00:09:04.270 "nvme_iov_md": false 00:09:04.270 }, 00:09:04.270 "driver_specific": { 00:09:04.270 "lvol": { 00:09:04.270 "lvol_store_uuid": "3f3a795e-12f5-4e03-a298-22a28d5a3e9f", 00:09:04.270 "base_bdev": "aio_bdev", 00:09:04.270 "thin_provision": false, 00:09:04.270 "num_allocated_clusters": 38, 00:09:04.270 "snapshot": false, 00:09:04.270 "clone": false, 00:09:04.270 "esnap_clone": false 00:09:04.270 } 00:09:04.270 } 00:09:04.270 } 00:09:04.270 ] 00:09:04.270 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:04.270 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:04.270 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.529 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.529 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:04.529 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.529 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.529 03:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cbe9196a-ecd6-44ae-ba26-a6fa4fc952f2 00:09:04.789 03:20:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f3a795e-12f5-4e03-a298-22a28d5a3e9f 00:09:05.048 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.307 00:09:05.307 real 0m18.778s 00:09:05.307 user 0m48.440s 00:09:05.307 sys 0m3.839s 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.307 ************************************ 00:09:05.307 END TEST lvs_grow_dirty 00:09:05.307 ************************************ 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:05.307 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:05.308 nvmf_trace.0 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.308 rmmod nvme_tcp 00:09:05.308 rmmod nvme_fabrics 00:09:05.308 rmmod nvme_keyring 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:05.308 
03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2522150 ']' 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2522150 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2522150 ']' 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2522150 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.308 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522150 00:09:05.567 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.567 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.567 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522150' 00:09:05.567 killing process with pid 2522150 00:09:05.567 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2522150 00:09:05.567 03:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2522150 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.503 03:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.042 00:09:09.042 real 0m45.470s 00:09:09.042 user 1m11.753s 00:09:09.042 sys 0m9.628s 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.042 ************************************ 00:09:09.042 END TEST nvmf_lvs_grow 00:09:09.042 ************************************ 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.042 ************************************ 00:09:09.042 START TEST nvmf_bdev_io_wait 00:09:09.042 ************************************ 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.042 * Looking for test storage... 00:09:09.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.042 --rc genhtml_branch_coverage=1 00:09:09.042 --rc genhtml_function_coverage=1 00:09:09.042 --rc genhtml_legend=1 00:09:09.042 --rc geninfo_all_blocks=1 00:09:09.042 --rc geninfo_unexecuted_blocks=1 00:09:09.042 00:09:09.042 ' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.042 --rc genhtml_branch_coverage=1 00:09:09.042 --rc genhtml_function_coverage=1 00:09:09.042 --rc genhtml_legend=1 00:09:09.042 --rc geninfo_all_blocks=1 00:09:09.042 --rc geninfo_unexecuted_blocks=1 00:09:09.042 00:09:09.042 ' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.042 --rc genhtml_branch_coverage=1 00:09:09.042 --rc genhtml_function_coverage=1 00:09:09.042 --rc genhtml_legend=1 00:09:09.042 --rc geninfo_all_blocks=1 00:09:09.042 --rc geninfo_unexecuted_blocks=1 00:09:09.042 00:09:09.042 ' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.042 --rc genhtml_branch_coverage=1 00:09:09.042 --rc genhtml_function_coverage=1 00:09:09.042 --rc genhtml_legend=1 00:09:09.042 --rc geninfo_all_blocks=1 00:09:09.042 --rc geninfo_unexecuted_blocks=1 00:09:09.042 00:09:09.042 ' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.042 03:20:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.042 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.043 03:20:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:14.319 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:14.320 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:14.320 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.320 03:20:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:14.320 Found net devices under 0000:af:00.0: cvl_0_0 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:14.320 Found net devices under 0000:af:00.1: cvl_0_1 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:09:14.320 00:09:14.320 --- 10.0.0.2 ping statistics --- 00:09:14.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.320 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:14.320 00:09:14.320 --- 10.0.0.1 ping statistics --- 00:09:14.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.320 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.320 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2526364 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2526364 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2526364 ']' 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.579 03:20:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.580 [2024-12-13 03:20:15.617495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:14.580 [2024-12-13 03:20:15.617585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.580 [2024-12-13 03:20:15.738159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.838 [2024-12-13 03:20:15.849906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.838 [2024-12-13 03:20:15.849962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.838 [2024-12-13 03:20:15.849974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.838 [2024-12-13 03:20:15.849985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.838 [2024-12-13 03:20:15.849993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.838 [2024-12-13 03:20:15.852364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.838 [2024-12-13 03:20:15.852439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.838 [2024-12-13 03:20:15.852607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.838 [2024-12-13 03:20:15.852615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.406 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.406 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:15.406 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.406 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.406 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.407 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:15.666 [2024-12-13 03:20:16.681974] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 Malloc0 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 [2024-12-13 03:20:16.779646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2526607 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2526609 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.666 { 00:09:15.666 "params": { 
00:09:15.666 "name": "Nvme$subsystem", 00:09:15.666 "trtype": "$TEST_TRANSPORT", 00:09:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.666 "adrfam": "ipv4", 00:09:15.666 "trsvcid": "$NVMF_PORT", 00:09:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.666 "hdgst": ${hdgst:-false}, 00:09:15.666 "ddgst": ${ddgst:-false} 00:09:15.666 }, 00:09:15.666 "method": "bdev_nvme_attach_controller" 00:09:15.666 } 00:09:15.666 EOF 00:09:15.666 )") 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2526611 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:15.666 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.666 { 00:09:15.666 "params": { 00:09:15.666 "name": "Nvme$subsystem", 00:09:15.666 "trtype": "$TEST_TRANSPORT", 00:09:15.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.666 "adrfam": "ipv4", 00:09:15.666 "trsvcid": "$NVMF_PORT", 00:09:15.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.667 "hdgst": ${hdgst:-false}, 00:09:15.667 "ddgst": ${ddgst:-false} 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 } 00:09:15.667 EOF 00:09:15.667 )") 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2526614 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.667 { 00:09:15.667 "params": { 
00:09:15.667 "name": "Nvme$subsystem", 00:09:15.667 "trtype": "$TEST_TRANSPORT", 00:09:15.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.667 "adrfam": "ipv4", 00:09:15.667 "trsvcid": "$NVMF_PORT", 00:09:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.667 "hdgst": ${hdgst:-false}, 00:09:15.667 "ddgst": ${ddgst:-false} 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 } 00:09:15.667 EOF 00:09:15.667 )") 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.667 { 00:09:15.667 "params": { 00:09:15.667 "name": "Nvme$subsystem", 00:09:15.667 "trtype": "$TEST_TRANSPORT", 00:09:15.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.667 "adrfam": "ipv4", 00:09:15.667 "trsvcid": "$NVMF_PORT", 00:09:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.667 "hdgst": ${hdgst:-false}, 00:09:15.667 "ddgst": ${ddgst:-false} 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 } 00:09:15.667 EOF 00:09:15.667 )") 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2526607 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.667 "params": { 00:09:15.667 "name": "Nvme1", 00:09:15.667 "trtype": "tcp", 00:09:15.667 "traddr": "10.0.0.2", 00:09:15.667 "adrfam": "ipv4", 00:09:15.667 "trsvcid": "4420", 00:09:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.667 "hdgst": false, 00:09:15.667 "ddgst": false 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 }' 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.667 "params": { 00:09:15.667 "name": "Nvme1", 00:09:15.667 "trtype": "tcp", 00:09:15.667 "traddr": "10.0.0.2", 00:09:15.667 "adrfam": "ipv4", 00:09:15.667 "trsvcid": "4420", 00:09:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.667 "hdgst": false, 00:09:15.667 "ddgst": false 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 }' 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.667 "params": { 00:09:15.667 "name": "Nvme1", 00:09:15.667 "trtype": "tcp", 00:09:15.667 "traddr": "10.0.0.2", 00:09:15.667 "adrfam": "ipv4", 00:09:15.667 "trsvcid": "4420", 00:09:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.667 "hdgst": false, 00:09:15.667 "ddgst": false 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 }' 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.667 03:20:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.667 "params": { 00:09:15.667 "name": "Nvme1", 00:09:15.667 "trtype": "tcp", 00:09:15.667 "traddr": "10.0.0.2", 00:09:15.667 "adrfam": "ipv4", 00:09:15.667 "trsvcid": "4420", 00:09:15.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.667 "hdgst": false, 00:09:15.667 "ddgst": false 00:09:15.667 }, 00:09:15.667 "method": "bdev_nvme_attach_controller" 00:09:15.667 }' 00:09:15.667 [2024-12-13 03:20:16.859404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:15.667 [2024-12-13 03:20:16.859410] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:15.667 [2024-12-13 03:20:16.859411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:15.667 [2024-12-13 03:20:16.859500] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-13 03:20:16.859500] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-13 03:20:16.859501] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:15.667 --proc-type=auto ] 00:09:15.667 --proc-type=auto ] 00:09:15.667 [2024-12-13 03:20:16.861000] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:15.667 [2024-12-13 03:20:16.861084] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:15.926 [2024-12-13 03:20:17.103201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.185 [2024-12-13 03:20:17.198859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.185 [2024-12-13 03:20:17.213402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:16.185 [2024-12-13 03:20:17.302159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.185 [2024-12-13 03:20:17.305703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:16.185 [2024-12-13 03:20:17.347327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.444 [2024-12-13 03:20:17.423243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:16.444 [2024-12-13 03:20:17.454728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:16.444 Running I/O for 1 seconds... 00:09:16.704 Running I/O for 1 seconds... 00:09:16.704 Running I/O for 1 seconds... 00:09:16.963 Running I/O for 1 seconds... 00:09:17.529 7243.00 IOPS, 28.29 MiB/s 00:09:17.529 Latency(us) 00:09:17.529 [2024-12-13T02:20:18.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.529 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:17.529 Nvme1n1 : 1.02 7244.31 28.30 0.00 0.00 17489.36 5336.50 38947.11 00:09:17.529 [2024-12-13T02:20:18.738Z] =================================================================================================================== 00:09:17.529 [2024-12-13T02:20:18.738Z] Total : 7244.31 28.30 0.00 0.00 17489.36 5336.50 38947.11 00:09:17.787 6577.00 IOPS, 25.69 MiB/s 00:09:17.787 Latency(us) 00:09:17.787 [2024-12-13T02:20:18.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.787 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:17.788 Nvme1n1 : 1.01 6658.82 26.01 0.00 0.00 19150.82 6491.18 42692.02 00:09:17.788 [2024-12-13T02:20:18.997Z] =================================================================================================================== 00:09:17.788 [2024-12-13T02:20:18.997Z] Total : 6658.82 26.01 0.00 0.00 19150.82 6491.18 42692.02 00:09:17.788 11116.00 IOPS, 43.42 MiB/s 00:09:17.788 Latency(us) 00:09:17.788 [2024-12-13T02:20:18.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.788 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:17.788 Nvme1n1 : 1.01 11188.21 43.70 0.00 0.00 11405.94 4025.78 19223.89 00:09:17.788 [2024-12-13T02:20:18.997Z] =================================================================================================================== 00:09:17.788 [2024-12-13T02:20:18.997Z] Total : 11188.21 43.70 0.00 0.00 11405.94 4025.78 19223.89 00:09:18.047 214816.00 IOPS, 839.12 MiB/s 00:09:18.047 Latency(us) 00:09:18.047 [2024-12-13T02:20:19.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.047 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:18.047 Nvme1n1 : 1.00 214462.42 837.74 0.00 0.00 593.77 269.17 1622.80 00:09:18.047 [2024-12-13T02:20:19.256Z] 
=================================================================================================================== 00:09:18.047 [2024-12-13T02:20:19.256Z] Total : 214462.42 837.74 0.00 0.00 593.77 269.17 1622.80 00:09:18.306 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2526609 00:09:18.306 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2526611 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2526614 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.564 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.564 rmmod nvme_tcp 00:09:18.564 rmmod nvme_fabrics 00:09:18.824 rmmod nvme_keyring 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2526364 ']' 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2526364 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2526364 ']' 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2526364 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2526364 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2526364' 00:09:18.824 killing process with pid 2526364 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2526364 00:09:18.824 03:20:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2526364 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.761 03:20:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.298 03:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.298 00:09:22.298 real 0m13.242s 00:09:22.298 user 0m29.521s 00:09:22.298 sys 0m6.338s 00:09:22.298 03:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.298 03:20:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.298 ************************************ 00:09:22.298 END TEST nvmf_bdev_io_wait 00:09:22.298 ************************************ 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.298 ************************************ 00:09:22.298 START TEST nvmf_queue_depth 00:09:22.298 ************************************ 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:22.298 * Looking for test storage... 
00:09:22.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.298 --rc genhtml_branch_coverage=1 00:09:22.298 --rc genhtml_function_coverage=1 00:09:22.298 --rc genhtml_legend=1 00:09:22.298 --rc geninfo_all_blocks=1 00:09:22.298 --rc geninfo_unexecuted_blocks=1 00:09:22.298 00:09:22.298 ' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.298 --rc genhtml_branch_coverage=1 00:09:22.298 --rc genhtml_function_coverage=1 00:09:22.298 --rc genhtml_legend=1 00:09:22.298 --rc geninfo_all_blocks=1 00:09:22.298 --rc geninfo_unexecuted_blocks=1 00:09:22.298 00:09:22.298 ' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.298 --rc genhtml_branch_coverage=1 00:09:22.298 --rc genhtml_function_coverage=1 00:09:22.298 --rc genhtml_legend=1 00:09:22.298 --rc geninfo_all_blocks=1 00:09:22.298 --rc geninfo_unexecuted_blocks=1 00:09:22.298 00:09:22.298 ' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.298 --rc genhtml_branch_coverage=1 00:09:22.298 --rc genhtml_function_coverage=1 00:09:22.298 --rc genhtml_legend=1 00:09:22.298 --rc geninfo_all_blocks=1 00:09:22.298 --rc geninfo_unexecuted_blocks=1 00:09:22.298 00:09:22.298 ' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.298 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.299 03:20:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:27.599 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:27.599 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.599 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:27.600 Found net devices under 0000:af:00.0: cvl_0_0 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:27.600 Found net devices under 0000:af:00.1: cvl_0_1 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:09:27.600 00:09:27.600 --- 10.0.0.2 ping statistics --- 00:09:27.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.600 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:09:27.600 00:09:27.600 --- 10.0.0.1 ping statistics --- 00:09:27.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.600 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2530773 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2530773 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2530773 ']' 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.600 03:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:27.600 [2024-12-13 03:20:28.746128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
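For reference, the nvmf_tcp_init sequence traced above reduces to roughly the following steps; the interface names, addresses and binary path are the ones printed in this run, and this is a condensed sketch of what the trace shows, not the authoritative test script:
  ip netns add cvl_0_0_ns_spdk                                          # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow the NVMe/TCP port
  ping -c 1 10.0.0.2                                                    # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator reachability
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
Both pings completing with 0% loss is what lets nvmftestinit return 0 above, after which the nvmf_tgt target application is started inside the namespace and the test waits for it to listen on /var/tmp/spdk.sock.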
00:09:27.600 [2024-12-13 03:20:28.746217] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.860 [2024-12-13 03:20:28.860553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.860 [2024-12-13 03:20:28.965072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.860 [2024-12-13 03:20:28.965119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.860 [2024-12-13 03:20:28.965129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.860 [2024-12-13 03:20:28.965158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.860 [2024-12-13 03:20:28.965167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.860 [2024-12-13 03:20:28.966503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.428 [2024-12-13 03:20:29.595027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.428 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.688 Malloc0 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.688 03:20:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.688 [2024-12-13 03:20:29.705977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2531009 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2531009 /var/tmp/bdevperf.sock 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2531009 ']' 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.688 03:20:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:28.688 [2024-12-13 03:20:29.782258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
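Once the target is listening, the queue_depth test configures it entirely over the RPC socket. The rpc_cmd calls traced above (rpc_cmd being the autotest wrapper that invokes scripts/rpc.py; the explicit rpc.py form below is an illustrative rendering, with the NQN, serial and sizes taken from this run) amount to:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                # create the TCP transport with the options the test passes
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # expose Malloc0 as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
The initiator side is the bdevperf application launched with '-z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10', i.e. a 10-second verify workload at queue depth 1024 with 4 KiB I/Os, held idle until driven over its own RPC socket; the bdev_nvme_attach_controller and perform_tests calls that actually start it follow in the trace below.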
00:09:28.688 [2024-12-13 03:20:29.782338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531009 ] 00:09:28.688 [2024-12-13 03:20:29.892506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.947 [2024-12-13 03:20:30.002998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.515 NVMe0n1 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.515 03:20:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.774 Running I/O for 10 seconds... 00:09:31.649 10240.00 IOPS, 40.00 MiB/s [2024-12-13T02:20:34.236Z] 10541.00 IOPS, 41.18 MiB/s [2024-12-13T02:20:35.174Z] 10583.67 IOPS, 41.34 MiB/s [2024-12-13T02:20:36.111Z] 10733.75 IOPS, 41.93 MiB/s [2024-12-13T02:20:37.048Z] 10744.40 IOPS, 41.97 MiB/s [2024-12-13T02:20:37.986Z] 10753.00 IOPS, 42.00 MiB/s [2024-12-13T02:20:38.923Z] 10805.57 IOPS, 42.21 MiB/s [2024-12-13T02:20:39.861Z] 10811.25 IOPS, 42.23 MiB/s [2024-12-13T02:20:40.907Z] 10802.78 IOPS, 42.20 MiB/s [2024-12-13T02:20:41.166Z] 10814.80 IOPS, 42.25 MiB/s 00:09:39.957 Latency(us) 00:09:39.957 [2024-12-13T02:20:41.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.957 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:39.957 Verification LBA range: start 0x0 length 0x4000 00:09:39.957 NVMe0n1 : 10.07 10830.12 42.31 0.00 0.00 94164.45 20721.86 62165.58 00:09:39.957 [2024-12-13T02:20:41.166Z] =================================================================================================================== 00:09:39.957 [2024-12-13T02:20:41.166Z] Total : 10830.12 42.31 0.00 0.00 94164.45 20721.86 62165.58 00:09:39.957 { 00:09:39.957 "results": [ 00:09:39.957 { 00:09:39.957 "job": "NVMe0n1", 00:09:39.957 "core_mask": "0x1", 00:09:39.957 "workload": "verify", 00:09:39.958 "status": "finished", 00:09:39.958 "verify_range": { 00:09:39.958 "start": 0, 00:09:39.958 "length": 16384 00:09:39.958 }, 00:09:39.958 "queue_depth": 1024, 00:09:39.958 "io_size": 4096, 00:09:39.958 "runtime": 10.071083, 00:09:39.958 "iops": 10830.11628441549, 00:09:39.958 "mibps": 42.305141735998006, 00:09:39.958 "io_failed": 0, 00:09:39.958 "io_timeout": 0, 00:09:39.958 "avg_latency_us": 94164.45072054856, 00:09:39.958 "min_latency_us": 20721.859047619047, 00:09:39.958 "max_latency_us": 62165.577142857146 00:09:39.958 } 00:09:39.958 ], 00:09:39.958 "core_count": 1 00:09:39.958 } 00:09:39.958 03:20:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2531009 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2531009 ']' 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2531009 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2531009 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2531009' 00:09:39.958 killing process with pid 2531009 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2531009 00:09:39.958 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.958 00:09:39.958 Latency(us) 00:09:39.958 [2024-12-13T02:20:41.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.958 [2024-12-13T02:20:41.167Z] =================================================================================================================== 00:09:39.958 [2024-12-13T02:20:41.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.958 03:20:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2531009 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.895 rmmod nvme_tcp 00:09:40.895 rmmod nvme_fabrics 00:09:40.895 rmmod nvme_keyring 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2530773 ']' 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2530773 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2530773 ']' 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2530773 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2530773 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2530773' 00:09:40.895 killing process with pid 2530773 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2530773 00:09:40.895 03:20:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2530773 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.286 03:20:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.191 00:09:44.191 real 0m22.253s 00:09:44.191 user 0m27.506s 00:09:44.191 sys 0m5.808s 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:44.191 ************************************ 00:09:44.191 END TEST nvmf_queue_depth 00:09:44.191 ************************************ 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.191 ************************************ 00:09:44.191 START TEST nvmf_target_multipath 00:09:44.191 ************************************ 00:09:44.191 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:44.451 * Looking for test storage... 00:09:44.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.451 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:44.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.451 --rc genhtml_branch_coverage=1 00:09:44.451 --rc genhtml_function_coverage=1 00:09:44.451 --rc genhtml_legend=1 00:09:44.451 --rc geninfo_all_blocks=1 00:09:44.451 --rc geninfo_unexecuted_blocks=1 00:09:44.451 00:09:44.451 ' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:44.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.452 --rc genhtml_branch_coverage=1 00:09:44.452 --rc genhtml_function_coverage=1 00:09:44.452 --rc genhtml_legend=1 00:09:44.452 --rc geninfo_all_blocks=1 00:09:44.452 --rc geninfo_unexecuted_blocks=1 00:09:44.452 00:09:44.452 ' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:44.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.452 --rc genhtml_branch_coverage=1 00:09:44.452 --rc genhtml_function_coverage=1 00:09:44.452 --rc genhtml_legend=1 00:09:44.452 --rc geninfo_all_blocks=1 00:09:44.452 --rc geninfo_unexecuted_blocks=1 00:09:44.452 00:09:44.452 ' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:44.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.452 --rc genhtml_branch_coverage=1 00:09:44.452 --rc genhtml_function_coverage=1 00:09:44.452 --rc genhtml_legend=1 00:09:44.452 --rc geninfo_all_blocks=1 00:09:44.452 --rc geninfo_unexecuted_blocks=1 00:09:44.452 00:09:44.452 ' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.452 03:20:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:49.728 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:49.728 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.728 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:49.729 Found net devices under 0000:af:00.0: cvl_0_0 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.729 03:20:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:49.729 Found net devices under 0000:af:00.1: cvl_0_1 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.729 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.988 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:49.988 03:20:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:09:49.988 00:09:49.988 --- 10.0.0.2 ping statistics --- 00:09:49.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.988 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:49.988 00:09:49.988 --- 10.0.0.1 ping statistics --- 00:09:49.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.988 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:49.988 only one NIC for nvmf test 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
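The nvmf_tcp_init trace above amounts to a small, self-contained topology: the target-side port is moved into a private network namespace and addressed as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path before any NVMe traffic flows. A minimal sketch of that setup, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing used in this run (root required):

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the default NVMe/TCP port with a tagged rule so teardown can find it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Reachability check in both directions.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1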
00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.988 rmmod nvme_tcp 00:09:49.988 rmmod nvme_fabrics 00:09:49.988 rmmod nvme_keyring 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.988 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.989 03:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.524 00:09:52.524 real 0m7.897s 00:09:52.524 user 0m1.761s 00:09:52.524 sys 0m4.151s 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:52.524 ************************************ 00:09:52.524 END TEST nvmf_target_multipath 00:09:52.524 ************************************ 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.524 03:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.524 ************************************ 00:09:52.524 START TEST nvmf_zcopy 00:09:52.525 ************************************ 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:52.525 * Looking for test storage... 
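The nvmftestfini path traced just above is the mirror image of that setup: the nvme-tcp and nvme-fabrics modules are unloaded, the iptr helper rewrites iptables without the SPDK_NVMF-tagged rule, the SPDK namespace is torn down, and the initiator address is flushed. A rough standalone equivalent is sketched below; _remove_spdk_ns itself is not expanded in this trace, so the ip netns del line is an assumption about what it boils down to for this run.

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Keep every firewall rule except the ones tagged SPDK_NVMF at setup time.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns del cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns here
ip -4 addr flush cvl_0_1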
00:09:52.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.525 --rc genhtml_branch_coverage=1 00:09:52.525 --rc genhtml_function_coverage=1 00:09:52.525 --rc genhtml_legend=1 00:09:52.525 --rc geninfo_all_blocks=1 00:09:52.525 --rc geninfo_unexecuted_blocks=1 00:09:52.525 00:09:52.525 ' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.525 --rc genhtml_branch_coverage=1 00:09:52.525 --rc genhtml_function_coverage=1 00:09:52.525 --rc genhtml_legend=1 00:09:52.525 --rc geninfo_all_blocks=1 00:09:52.525 --rc geninfo_unexecuted_blocks=1 00:09:52.525 00:09:52.525 ' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.525 --rc genhtml_branch_coverage=1 00:09:52.525 --rc genhtml_function_coverage=1 00:09:52.525 --rc genhtml_legend=1 00:09:52.525 --rc geninfo_all_blocks=1 00:09:52.525 --rc geninfo_unexecuted_blocks=1 00:09:52.525 00:09:52.525 ' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.525 --rc genhtml_branch_coverage=1 00:09:52.525 --rc genhtml_function_coverage=1 00:09:52.525 --rc genhtml_legend=1 00:09:52.525 --rc geninfo_all_blocks=1 00:09:52.525 --rc geninfo_unexecuted_blocks=1 00:09:52.525 00:09:52.525 ' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.525 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.526 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.526 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.526 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.526 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.526 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.526 03:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:57.801 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:57.801 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:57.802 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:57.802 Found net devices under 0000:af:00.0: cvl_0_0 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:57.802 Found net devices under 0000:af:00.1: cvl_0_1 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.802 03:20:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.802 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:09:58.061 00:09:58.061 --- 10.0.0.2 ping statistics --- 00:09:58.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.061 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:09:58.061 00:09:58.061 --- 10.0.0.1 ping statistics --- 00:09:58.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.061 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.061 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2540049 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2540049 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2540049 ']' 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.062 03:20:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.062 [2024-12-13 03:20:59.220726] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:58.062 [2024-12-13 03:20:59.220813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.320 [2024-12-13 03:20:59.341737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.320 [2024-12-13 03:20:59.452817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.321 [2024-12-13 03:20:59.452857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.321 [2024-12-13 03:20:59.452867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.321 [2024-12-13 03:20:59.452878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.321 [2024-12-13 03:20:59.452886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.321 [2024-12-13 03:20:59.454268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 [2024-12-13 03:21:00.060234] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 [2024-12-13 03:21:00.076418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.888 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.148 malloc0 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:59.148 { 00:09:59.148 "params": { 00:09:59.148 "name": "Nvme$subsystem", 00:09:59.148 "trtype": "$TEST_TRANSPORT", 00:09:59.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.148 "adrfam": "ipv4", 00:09:59.148 "trsvcid": "$NVMF_PORT", 00:09:59.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.148 "hdgst": ${hdgst:-false}, 00:09:59.148 "ddgst": ${ddgst:-false} 00:09:59.148 }, 00:09:59.148 "method": "bdev_nvme_attach_controller" 00:09:59.148 } 00:09:59.148 EOF 00:09:59.148 )") 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
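Taken together, the rpc_cmd calls traced in this stretch stand up the whole zero-copy target: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev attached as namespace 1. The same sequence, condensed and issued through scripts/rpc.py (rpc_cmd is effectively the harness wrapper around it; the workspace path below matches this job and the option strings are copied from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0      # 32 MB bdev, 4096-byte block size
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1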
00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:59.148 03:21:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:59.148 "params": { 00:09:59.148 "name": "Nvme1", 00:09:59.148 "trtype": "tcp", 00:09:59.148 "traddr": "10.0.0.2", 00:09:59.148 "adrfam": "ipv4", 00:09:59.148 "trsvcid": "4420", 00:09:59.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.148 "hdgst": false, 00:09:59.148 "ddgst": false 00:09:59.148 }, 00:09:59.148 "method": "bdev_nvme_attach_controller" 00:09:59.148 }' 00:09:59.148 [2024-12-13 03:21:00.208576] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:59.148 [2024-12-13 03:21:00.208663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540229 ] 00:09:59.148 [2024-12-13 03:21:00.322735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.407 [2024-12-13 03:21:00.436156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.975 Running I/O for 10 seconds... 00:10:01.849 7468.00 IOPS, 58.34 MiB/s [2024-12-13T02:21:04.435Z] 7534.50 IOPS, 58.86 MiB/s [2024-12-13T02:21:05.370Z] 7550.00 IOPS, 58.98 MiB/s [2024-12-13T02:21:06.307Z] 7572.50 IOPS, 59.16 MiB/s [2024-12-13T02:21:07.243Z] 7579.00 IOPS, 59.21 MiB/s [2024-12-13T02:21:08.181Z] 7584.33 IOPS, 59.25 MiB/s [2024-12-13T02:21:09.118Z] 7587.29 IOPS, 59.28 MiB/s [2024-12-13T02:21:10.055Z] 7577.75 IOPS, 59.20 MiB/s [2024-12-13T02:21:11.434Z] 7580.33 IOPS, 59.22 MiB/s [2024-12-13T02:21:11.434Z] 7569.90 IOPS, 59.14 MiB/s 00:10:10.225 Latency(us) 00:10:10.225 [2024-12-13T02:21:11.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.225 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:10.225 Verification LBA range: start 0x0 length 0x1000 00:10:10.225 Nvme1n1 : 10.01 7573.94 59.17 0.00 0.00 16852.79 2028.50 23967.45 00:10:10.225 [2024-12-13T02:21:11.434Z] =================================================================================================================== 00:10:10.225 [2024-12-13T02:21:11.434Z] Total : 7573.94 59.17 0.00 0.00 16852.79 2028.50 23967.45 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2542745 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.793 { 00:10:10.793 "params": { 00:10:10.793 "name": 
"Nvme$subsystem", 00:10:10.793 "trtype": "$TEST_TRANSPORT", 00:10:10.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.793 "adrfam": "ipv4", 00:10:10.793 "trsvcid": "$NVMF_PORT", 00:10:10.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.793 "hdgst": ${hdgst:-false}, 00:10:10.793 "ddgst": ${ddgst:-false} 00:10:10.793 }, 00:10:10.793 "method": "bdev_nvme_attach_controller" 00:10:10.793 } 00:10:10.793 EOF 00:10:10.793 )") 00:10:10.793 [2024-12-13 03:21:11.951027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.793 [2024-12-13 03:21:11.951065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:10.793 [2024-12-13 03:21:11.959039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.793 [2024-12-13 03:21:11.959066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:10.793 03:21:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.793 "params": { 00:10:10.793 "name": "Nvme1", 00:10:10.793 "trtype": "tcp", 00:10:10.793 "traddr": "10.0.0.2", 00:10:10.793 "adrfam": "ipv4", 00:10:10.793 "trsvcid": "4420", 00:10:10.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.793 "hdgst": false, 00:10:10.793 "ddgst": false 00:10:10.793 }, 00:10:10.793 "method": "bdev_nvme_attach_controller" 00:10:10.793 }' 00:10:10.793 [2024-12-13 03:21:11.967009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.793 [2024-12-13 03:21:11.967031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.793 [2024-12-13 03:21:11.975032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.793 [2024-12-13 03:21:11.975053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.793 [2024-12-13 03:21:11.983049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.793 [2024-12-13 03:21:11.983070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.793 [2024-12-13 03:21:11.995072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.793 [2024-12-13 03:21:11.995092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.003112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.003133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.011125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.011144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.018433] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:11.053 [2024-12-13 03:21:12.018506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542745 ] 00:10:11.053 [2024-12-13 03:21:12.019135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.019154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.027180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.027200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.035182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.035205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.043223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.043243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.051235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.051253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.059258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.059276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.067275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.067293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.075300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.075318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.083315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.083333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.091348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.091366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.099353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.099371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.107387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.107405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.115407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.115425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.123417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.123435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.130896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.053 [2024-12-13 03:21:12.131454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.131479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.139475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.139494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.147492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.147513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.155535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.155555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.163535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.163552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.171566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.171584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.179591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.179610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.187593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.187611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.195628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.195646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.203648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.203667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.211657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.211675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.219697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.219716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.227702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.227720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.235738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:11.053 [2024-12-13 03:21:12.235757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.243547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.053 [2024-12-13 03:21:12.243765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.243784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.251788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.251806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.053 [2024-12-13 03:21:12.259817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.053 [2024-12-13 03:21:12.259838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.267832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.267852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.275840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.275859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.283873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.283891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.291882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.291900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.299924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.299942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.307951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.307968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.315952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.315970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.323991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.324009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.332012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.332034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.340028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.340049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.348067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.348087] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.356053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.356071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.364092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.364110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.372108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.372126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.380115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.380133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.388149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.388166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.396175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.396192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.404178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.404196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.412216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.412233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.420223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.420240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.428263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.428281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.436280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.436299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.444304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.444322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.452328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.452345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.460354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.460372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.468366] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.468385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.476411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.476431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.484411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.484434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.492448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.492466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.500467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.500486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.508470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.508488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.313 [2024-12-13 03:21:12.516514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.313 [2024-12-13 03:21:12.516532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.524532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.524551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.532538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.532556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.540587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.540605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.548582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.548601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.556615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.556632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.564635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.564653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.572645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.572663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.580680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.580698] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.588702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.588720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.596743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.596765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.604769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.604789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.612774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.612796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.620814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.620834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.628847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.628868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.636844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.636867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.644885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.644915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.652898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.652925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.660910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.572 [2024-12-13 03:21:12.660939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.572 [2024-12-13 03:21:12.668947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.668966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.676959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.676979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.685005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.685024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.693032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.693051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.701020] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.701039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.709057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.709075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.717081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.717100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.725119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.725138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.733123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.733142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.741127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.741146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.749167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.749185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.757195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.757214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.765197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.765216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.573 [2024-12-13 03:21:12.773231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.573 [2024-12-13 03:21:12.773251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.832 [2024-12-13 03:21:12.814232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.832 [2024-12-13 03:21:12.814257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.832 [2024-12-13 03:21:12.821364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.832 [2024-12-13 03:21:12.821384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.832 Running I/O for 5 seconds... 
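"Running I/O for 5 seconds..." marks the start of the bdevperf measurement window; the per-interval samples that follow (for example the 14242.00 IOPS, 111.27 MiB/s pair a little further down) are related by throughput = IOPS x I/O size. A quick sanity check, noting that the I/O size is not printed in this excerpt and the 8 KiB figure is only inferred from the ratio:

# Sanity-check the throughput pair reported below by bdevperf.
# The 8 KiB I/O size is an inference from the ratio, not printed in this log.
iops = 14242.00
io_size_bytes = 8 * 1024           # assumed 8 KiB per I/O
mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")    # -> 111.27 MiB/s, matching the log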
00:10:11.832 [2024-12-13 03:21:12.829403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.832 [2024-12-13 03:21:12.829422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.832 [2024-12-13 03:21:12.843329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.843361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.852402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.852425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.861390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.861413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.870050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.870073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.878907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.878940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.887714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.887737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.896658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.896681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.905535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.905558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.914426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.914449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.923165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.923188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.932032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.932056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.940745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.940767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.949724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.949747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.958461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 
[2024-12-13 03:21:12.958485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.967445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.967469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.976242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.976265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.985518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.985543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:12.994357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:12.994380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:13.003190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:13.003212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:13.011851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:13.011872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:13.021897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:13.021926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:13.032002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:13.032026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.833 [2024-12-13 03:21:13.039682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.833 [2024-12-13 03:21:13.039703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.051320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.051342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.059884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.059906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.070229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.070252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.078034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.078057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.089617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.089641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.098249] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.098271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.107300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.107323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.116077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.116099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.125289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.125312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.134317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.134339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.143508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.143530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.152498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.152520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.161659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.161686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.170807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.170829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.179593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.179615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.188534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.188556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.197693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.197715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.206500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.206522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.215242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.215264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.224159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.224182] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.233065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.233088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.242273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.242295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.251292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.251314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.260258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.260281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.269321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.269343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.278037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.278060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.287219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.287242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.093 [2024-12-13 03:21:13.296330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.093 [2024-12-13 03:21:13.296357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.352 [2024-12-13 03:21:13.305417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.352 [2024-12-13 03:21:13.305440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.352 [2024-12-13 03:21:13.314396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.352 [2024-12-13 03:21:13.314418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.352 [2024-12-13 03:21:13.323428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.352 [2024-12-13 03:21:13.323450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.352 [2024-12-13 03:21:13.332463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.332489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.341376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.341399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.350332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.350354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.359015] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.359038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.367837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.367859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.376979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.377001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.385899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.385929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.394908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.394939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.403874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.403896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.412985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.413008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.421991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.422014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.431004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.431027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.439638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.439661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.448165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.448188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.457031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.457054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.466119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.466143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.474977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.475000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.483756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.483779] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.492782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.492804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.501757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.501783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.510548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.510571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.519312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.519334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.528152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.528175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.537980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.538003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.546003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.546026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.353 [2024-12-13 03:21:13.557453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.353 [2024-12-13 03:21:13.557476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.566178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.566207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.574869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.574891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.583553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.583576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.592424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.592447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.601487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.601509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.609925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.609948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.618814] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.618836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.627601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.627624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.637554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.637576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.647684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.647709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.655478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.655501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.666325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.666348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.674707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.674738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.685060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.685083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.695132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.695154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.703148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.703181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.713745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.713767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.722034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.722056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.732520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.732542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.741070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.741093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.751371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.751394] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.761391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.761413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.771039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.771062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.778729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.778752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.790175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.790198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.798789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.798811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.807550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.807572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.612 [2024-12-13 03:21:13.816264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.612 [2024-12-13 03:21:13.816287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.824988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.825012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 14242.00 IOPS, 111.27 MiB/s [2024-12-13T02:21:14.080Z] [2024-12-13 03:21:13.833831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.833853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.842477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.842499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.851251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.851273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.860351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.860373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.869436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.869458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.878434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.878457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 
03:21:13.887383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.887404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.896165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.896187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.905016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.905038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.914226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.914248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.922848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.922869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.931497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.931519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.940265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.940287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.948757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.948779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.957314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.957336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.966050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.966073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.975196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.975219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.983930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.983969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:13.992721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:13.992746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.001687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.001711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.010715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.010739] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.019393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.019417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.028352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.028374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.037078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.037101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.045910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.045943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.054883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.054907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.063983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.064007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.871 [2024-12-13 03:21:14.072939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.871 [2024-12-13 03:21:14.072962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.081908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.081939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.090648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.090670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.099561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.099585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.108409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.108432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.117043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.117065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.126018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.126052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.134928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.134951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.143882] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.143905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.152822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.152844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.161668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.161691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.170199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.170222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.178730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.178752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.187675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.187698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.196622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.196645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.205397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.205419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.213791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.213814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.222403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.222426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.231188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.231211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.240252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.240276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.249205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.249229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.257881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.257905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.266834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.266858] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.275511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.275533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.284376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.284399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.293578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.293602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.302261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.302284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.310957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.310981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.319661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.319684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.328458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.131 [2024-12-13 03:21:14.328482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.131 [2024-12-13 03:21:14.337179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.132 [2024-12-13 03:21:14.337202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.346099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.346126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.354897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.354927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.363525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.363548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.372222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.372245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.380961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.380985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.389816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.389841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.398525] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.398548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.407474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.407497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.416314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.416337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.425366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.425389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.434068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.434091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.442720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.442742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.451515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.451536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.460406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.460428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.469139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.469162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.478106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.478129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.487096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.487119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.495883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.495906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.504532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.504555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.513400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.513426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.522401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.522424] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.531098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.531121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.539691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.539714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.548577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.548599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.557422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.557445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.566125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.566149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.575211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.575234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.583932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.583955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.391 [2024-12-13 03:21:14.592867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.391 [2024-12-13 03:21:14.592889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.601756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.601779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.610933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.610956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.620167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.620190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.629287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.629309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.638068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.638090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.646756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.646778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.655624] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.655646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.664631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.664653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.673337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.673359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.682588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.682615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.691311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.691333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.699861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.699884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.708595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.708618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.650 [2024-12-13 03:21:14.717432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.650 [2024-12-13 03:21:14.717455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.726038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.726061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.734902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.734931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.743522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.743544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.752730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.752753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.761822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.761844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.770657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.770679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.779440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.779463] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.788141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.788163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.796941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.796964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.805862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.805885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.814654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.814676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.823375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.823396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.832275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.832298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 14309.00 IOPS, 111.79 MiB/s [2024-12-13T02:21:14.860Z] [2024-12-13 03:21:14.841039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.841061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.651 [2024-12-13 03:21:14.849891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.651 [2024-12-13 03:21:14.849914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.858952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.858977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.867980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.868002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.877162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.877185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.885939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.885962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.894928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.894951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.903975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.903997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 
03:21:14.912940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.912963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.921844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.921867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.930782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.930805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.939701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.939723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.948696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.948718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.957607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.957629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.966234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.966257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.974945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.974968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.983582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.983605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:14.992263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:14.992286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.001137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.001159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.010163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.010196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.019285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.019307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.028151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.028174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.037050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.037075] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.046049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.046072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.054978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.055001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.064029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.064052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.072866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.072890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.081529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.081552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.090560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.090584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.099455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.099479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.107871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.107894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.911 [2024-12-13 03:21:15.116586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.911 [2024-12-13 03:21:15.116609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.125648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.125672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.134439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.134462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.143279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.143302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.152073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.152096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.160931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.160954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.169611] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.169633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.178359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.178382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.187316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.187338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.196192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.196214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.205062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.205085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.214004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.214026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.222907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.222938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.169 [2024-12-13 03:21:15.231789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.169 [2024-12-13 03:21:15.231812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.240604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.240627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.249766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.249788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.258344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.258367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.266951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.266973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.275974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.275996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.284765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.284787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.293422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.293444] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.302243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.302266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.311180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.311202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.319946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.319969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.328534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.328555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.337192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.337216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.346029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.346053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.354707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.354729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.363504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.363528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.170 [2024-12-13 03:21:15.372514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.170 [2024-12-13 03:21:15.372537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.381459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.381482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.390380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.390402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.399157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.399179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.407778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.407801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.416519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.416543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.425516] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.425538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.434239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.434262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.443093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.443116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.451832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.451856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.460759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.460782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.469443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.469466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.478083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.478105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.486774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.486797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.495429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.495452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.504096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.504119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.512980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.513008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.521791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.521813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.530714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.530738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.539821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.539844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.548586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.548610] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.557376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.557399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.566325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.566347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.574929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.574968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.583602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.583626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.592506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.592530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.601355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.601378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.610112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.610135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.619031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.619055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.428 [2024-12-13 03:21:15.627741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.428 [2024-12-13 03:21:15.627764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.687 [2024-12-13 03:21:15.637052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.637077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.646004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.646027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.656293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.656316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.664368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.664391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.675751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.675775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.685749] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.685776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.695186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.695209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.704940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.704963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.712630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.712651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.724095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.724119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.734034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.734057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.743476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.743499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.751147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.751169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.762427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.762450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.770886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.770908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.781242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.781266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.790011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.790047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.799066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.799089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.808023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.808046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.818251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.818274] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.827033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.827056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 14348.33 IOPS, 112.10 MiB/s [2024-12-13T02:21:15.897Z] [2024-12-13 03:21:15.838943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.838965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.847543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.847565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.856590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.856612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.865493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.865520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.874099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.874122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.882865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.882888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.688 [2024-12-13 03:21:15.891557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.688 [2024-12-13 03:21:15.891579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.900328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.900351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.908884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.908907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.917736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.917759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.926782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.926804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.935816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.935838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.944873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.944896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 
03:21:15.953578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.953600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.962417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.947 [2024-12-13 03:21:15.962439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.947 [2024-12-13 03:21:15.971209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:15.971231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:15.980333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:15.980355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:15.989335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:15.989358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:15.997833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:15.997856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.006580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.006602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.015275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.015297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.024081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.024103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.032888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.032910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.041721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.041744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.050491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.050514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.059338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.059361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.068550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.068575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.077298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.077323] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.086352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.086375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.094884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.094907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.103824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.103846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.112867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.112889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.121631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.121653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.130502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.130525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.138975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.138998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.948 [2024-12-13 03:21:16.147588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.948 [2024-12-13 03:21:16.147610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.156378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.207 [2024-12-13 03:21:16.156403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.164953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.207 [2024-12-13 03:21:16.164976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.173701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.207 [2024-12-13 03:21:16.173723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.182484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.207 [2024-12-13 03:21:16.182507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.191253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.207 [2024-12-13 03:21:16.191275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.200277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.207 [2024-12-13 03:21:16.200300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.207 [2024-12-13 03:21:16.209052] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:15.207 [2024-12-13 03:21:16.209074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair ("Requested NSID 1 already in use" followed by "Unable to add namespace") repeats for every further add-namespace attempt in this window, roughly every 9-10 ms from 03:21:16.217 through 03:21:16.837; the individual entries are condensed here ...]
00:10:15.726 14352.00 IOPS, 112.12 MiB/s [2024-12-13T02:21:16.935Z]
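Editorial note: the error pair above is what the SPDK target logs when an nvmf_subsystem_add_ns RPC requests a namespace ID that is still attached to the subsystem. A minimal sketch of a call that would provoke the same pair of messages, assuming a running nvmf target that already serves NSID 1 on nqn.2016-06.io.spdk:cnode1 and a spare bdev (the bdev name malloc1 here is hypothetical, not taken from this log):

  # Attempt to attach another bdev as NSID 1 while that NSID is still in use.
  # The RPC is expected to fail, and the target side is expected to log
  # "Requested NSID 1 already in use" followed by "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1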
[... the "Requested NSID 1 already in use" / "Unable to add namespace" pair continues at the same cadence from 03:21:16.845 through 03:21:17.845; individual entries condensed ...]
00:10:16.766 14373.20 IOPS, 112.29 MiB/s [2024-12-13T02:21:17.975Z]
00:10:16.766 Latency(us)
00:10:16.766 Device Information  : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average        min        max
00:10:16.766 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:16.766 Nvme1n1             :       5.01  14377.46    112.32      0.00    0.00    8893.14    3900.95   18225.25
00:10:16.766 ===================================================================================================================
00:10:16.766 Total               :             14377.46    112.32      0.00    0.00    8893.14    3900.95   18225.25
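Editorial consistency check on the summary table above (not part of the log): with the reported IO size of 8192 bytes, 14377.46 IOPS corresponds to 14377.46 * 8192 / 1048576 ≈ 112.32 MiB/s, which matches the MiB/s column; Average, min, and max are latencies in microseconds per the Latency(us) header.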
[... the error pair continues from 03:21:17.851 through 03:21:18.749; individual entries condensed ...]
00:10:17.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2542745) - No such process
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2542745
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:17.805 delay0
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:17.805 03:21:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:17.805 [2024-12-13 03:21:18.930369] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:24.370 Initializing NVMe Controllers
00:10:24.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:24.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:24.370 Initialization complete. Launching workers.
00:10:24.370 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1114 00:10:24.370 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1401, failed to submit 33 00:10:24.370 success 1229, unsuccessful 172, failed 0 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.370 rmmod nvme_tcp 00:10:24.370 rmmod nvme_fabrics 00:10:24.370 rmmod nvme_keyring 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2540049 ']' 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2540049 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2540049 ']' 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2540049 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540049 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540049' 00:10:24.370 killing process with pid 2540049 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2540049 00:10:24.370 03:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2540049 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:25.307 03:21:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.307 03:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:27.844 00:10:27.844 real 0m35.240s 00:10:27.844 user 0m49.578s 00:10:27.844 sys 0m10.914s 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.844 ************************************ 00:10:27.844 END TEST nvmf_zcopy 00:10:27.844 ************************************ 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:27.844 ************************************ 00:10:27.844 START TEST nvmf_nmic 00:10:27.844 ************************************ 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:27.844 * Looking for test storage... 
00:10:27.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:27.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.844 --rc genhtml_branch_coverage=1 00:10:27.844 --rc genhtml_function_coverage=1 00:10:27.844 --rc genhtml_legend=1 00:10:27.844 --rc geninfo_all_blocks=1 00:10:27.844 --rc geninfo_unexecuted_blocks=1 00:10:27.844 00:10:27.844 ' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:27.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.844 --rc genhtml_branch_coverage=1 00:10:27.844 --rc genhtml_function_coverage=1 00:10:27.844 --rc genhtml_legend=1 00:10:27.844 --rc geninfo_all_blocks=1 00:10:27.844 --rc geninfo_unexecuted_blocks=1 00:10:27.844 00:10:27.844 ' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:27.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.844 --rc genhtml_branch_coverage=1 00:10:27.844 --rc genhtml_function_coverage=1 00:10:27.844 --rc genhtml_legend=1 00:10:27.844 --rc geninfo_all_blocks=1 00:10:27.844 --rc geninfo_unexecuted_blocks=1 00:10:27.844 00:10:27.844 ' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:27.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.844 --rc genhtml_branch_coverage=1 00:10:27.844 --rc genhtml_function_coverage=1 00:10:27.844 --rc genhtml_legend=1 00:10:27.844 --rc geninfo_all_blocks=1 00:10:27.844 --rc geninfo_unexecuted_blocks=1 00:10:27.844 00:10:27.844 ' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.844 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:27.845 
03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.845 03:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.122 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:33.123 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:33.123 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.123 03:21:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:33.123 Found net devices under 0000:af:00.0: cvl_0_0 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:33.123 Found net devices under 0000:af:00.1: cvl_0_1 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.123 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:10:33.383 00:10:33.383 --- 10.0.0.2 ping statistics --- 00:10:33.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.383 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:33.383 00:10:33.383 --- 10.0.0.1 ping statistics --- 00:10:33.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.383 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2548470 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2548470 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2548470 ']' 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.383 03:21:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:33.676 [2024-12-13 03:21:34.593550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:33.676 [2024-12-13 03:21:34.593638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.676 [2024-12-13 03:21:34.711696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.676 [2024-12-13 03:21:34.819931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.676 [2024-12-13 03:21:34.819980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.676 [2024-12-13 03:21:34.819991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.676 [2024-12-13 03:21:34.820001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.676 [2024-12-13 03:21:34.820009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.676 [2024-12-13 03:21:34.822353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.676 [2024-12-13 03:21:34.822429] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.676 [2024-12-13 03:21:34.822491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.676 [2024-12-13 03:21:34.822501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.282 [2024-12-13 03:21:35.451303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.282 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 Malloc0 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 [2024-12-13 03:21:35.561442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:34.567 test case1: single bdev can't be used in multiple subsystems 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 [2024-12-13 03:21:35.589273] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:34.567 [2024-12-13 03:21:35.589306] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:34.567 [2024-12-13 03:21:35.589318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.567 request: 00:10:34.567 { 00:10:34.567 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:34.567 "namespace": { 00:10:34.567 "bdev_name": "Malloc0", 00:10:34.567 "no_auto_visible": false, 
00:10:34.567 "hide_metadata": false 00:10:34.567 }, 00:10:34.567 "method": "nvmf_subsystem_add_ns", 00:10:34.567 "req_id": 1 00:10:34.567 } 00:10:34.567 Got JSON-RPC error response 00:10:34.567 response: 00:10:34.567 { 00:10:34.567 "code": -32602, 00:10:34.567 "message": "Invalid parameters" 00:10:34.567 } 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:34.567 Adding namespace failed - expected result. 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:34.567 test case2: host connect to nvmf target in multiple paths 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 [2024-12-13 03:21:35.601417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 03:21:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:35.945 03:21:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:36.882 03:21:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:36.882 03:21:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.882 03:21:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.882 03:21:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:36.882 03:21:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:38.786 03:21:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:38.786 03:21:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:38.786 03:21:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.786 03:21:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:38.786 03:21:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.786 03:21:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:38.786 03:21:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:38.786 [global] 00:10:38.786 thread=1 00:10:38.786 invalidate=1 00:10:38.786 rw=write 00:10:38.786 time_based=1 00:10:38.786 runtime=1 00:10:38.786 ioengine=libaio 00:10:38.786 direct=1 00:10:38.786 bs=4096 00:10:38.786 iodepth=1 00:10:38.786 norandommap=0 00:10:38.786 numjobs=1 00:10:38.786 00:10:38.786 verify_dump=1 00:10:38.786 verify_backlog=512 00:10:38.786 verify_state_save=0 00:10:38.786 do_verify=1 00:10:38.786 verify=crc32c-intel 00:10:38.786 [job0] 00:10:38.786 filename=/dev/nvme0n1 00:10:38.786 Could not set queue depth (nvme0n1) 00:10:39.043 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.043 fio-3.35 00:10:39.043 Starting 1 thread 00:10:40.421 00:10:40.421 job0: (groupid=0, jobs=1): err= 0: pid=2549661: Fri Dec 13 03:21:41 2024 00:10:40.421 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:10:40.421 slat (nsec): min=9938, max=24066, avg=22169.52, stdev=2734.66 00:10:40.421 clat (usec): min=40837, max=41097, avg=40964.15, stdev=63.20 00:10:40.421 lat (usec): min=40861, max=41120, avg=40986.31, stdev=63.18 00:10:40.421 clat percentiles (usec): 00:10:40.421 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:40.421 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:40.421 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:40.421 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:40.421 | 99.99th=[41157] 00:10:40.421 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:40.421 slat (nsec): min=6698, max=35694, avg=10823.46, stdev=2513.19 00:10:40.421 clat (usec): min=142, max=246, avg=161.33, stdev= 7.75 00:10:40.421 lat (usec): min=149, max=281, avg=172.15, stdev= 8.01 00:10:40.421 clat percentiles (usec): 00:10:40.421 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:10:40.421 | 30.00th=[ 159], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:10:40.421 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 169], 95.00th=[ 172], 00:10:40.421 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 247], 99.95th=[ 247], 00:10:40.421 | 99.99th=[ 247] 00:10:40.421 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:40.421 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:40.421 lat (usec) : 250=95.70% 00:10:40.421 lat (msec) : 50=4.30% 00:10:40.421 cpu : usr=0.39%, sys=0.87%, ctx=535, majf=0, minf=1 00:10:40.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.421 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.421 00:10:40.421 Run status group 0 (all jobs): 00:10:40.421 READ: bw=89.1KiB/s (91.2kB/s), 89.1KiB/s-89.1KiB/s (91.2kB/s-91.2kB/s), io=92.0KiB (94.2kB), run=1033-1033msec 00:10:40.421 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:10:40.421 00:10:40.421 Disk stats (read/write): 00:10:40.421 nvme0n1: ios=69/512, merge=0/0, ticks=886/73, in_queue=959, util=95.89% 00:10:40.421 03:21:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.989 03:21:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.989 rmmod nvme_tcp 00:10:40.989 rmmod nvme_fabrics 00:10:40.989 rmmod nvme_keyring 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2548470 ']' 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2548470 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2548470 ']' 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2548470 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548470 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2548470' 00:10:40.989 killing process with pid 2548470 00:10:40.989 03:21:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2548470 00:10:40.989 03:21:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2548470 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.374 03:21:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.913 00:10:44.913 real 0m16.855s 00:10:44.913 user 0m40.574s 00:10:44.913 sys 0m5.174s 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.913 ************************************ 00:10:44.913 END TEST nvmf_nmic 00:10:44.913 ************************************ 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.913 ************************************ 00:10:44.913 START TEST nvmf_fio_target 00:10:44.913 ************************************ 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:44.913 * Looking for test storage... 
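Both the nmic run above and the fio_target runs below drive I/O through scripts/fio-wrapper rather than invoking fio directly. Judging from the [global]/[job0] sections the wrapper prints, its flags appear to map onto a small libaio job file (-i block size, -d iodepth, -t rw mode, -r runtime, -v crc32c verification); a minimal hand-written equivalent, assuming a single connected /dev/nvme0n1 namespace, is sketched here:

# Sketch only: recreates the job shown in the trace by hand. Assumes fio is
# installed and /dev/nvme0n1 is an NVMe-oF namespace exposed by the running
# target; the flag-to-option mapping is inferred from the captured job file,
# not read out of fio-wrapper itself.
cat > /tmp/nvmf-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-write.fio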
00:10:44.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:44.913 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.914 --rc genhtml_branch_coverage=1 00:10:44.914 --rc genhtml_function_coverage=1 00:10:44.914 --rc genhtml_legend=1 00:10:44.914 --rc geninfo_all_blocks=1 00:10:44.914 --rc geninfo_unexecuted_blocks=1 00:10:44.914 00:10:44.914 ' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.914 --rc genhtml_branch_coverage=1 00:10:44.914 --rc genhtml_function_coverage=1 00:10:44.914 --rc genhtml_legend=1 00:10:44.914 --rc geninfo_all_blocks=1 00:10:44.914 --rc geninfo_unexecuted_blocks=1 00:10:44.914 00:10:44.914 ' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.914 --rc genhtml_branch_coverage=1 00:10:44.914 --rc genhtml_function_coverage=1 00:10:44.914 --rc genhtml_legend=1 00:10:44.914 --rc geninfo_all_blocks=1 00:10:44.914 --rc geninfo_unexecuted_blocks=1 00:10:44.914 00:10:44.914 ' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.914 --rc genhtml_branch_coverage=1 00:10:44.914 --rc genhtml_function_coverage=1 00:10:44.914 --rc genhtml_legend=1 00:10:44.914 --rc geninfo_all_blocks=1 00:10:44.914 --rc geninfo_unexecuted_blocks=1 00:10:44.914 00:10:44.914 ' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.914 03:21:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.914 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.915 03:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.194 03:21:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:50.194 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:50.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.194 03:21:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:50.194 Found net devices under 0000:af:00.0: cvl_0_0 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:50.194 Found net devices under 0000:af:00.1: cvl_0_1 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.194 03:21:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.194 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:10:50.454 00:10:50.454 --- 10.0.0.2 ping statistics --- 00:10:50.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.454 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:10:50.454 00:10:50.454 --- 10.0.0.1 ping statistics --- 00:10:50.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.454 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2553659 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2553659 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2553659 ']' 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.454 03:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.454 [2024-12-13 03:21:51.580694] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
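At this point nvmftestinit has already turned the two e810 ports into a small back-to-back topology: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace and addressed as the target-side 10.0.0.2, cvl_0_1 stayed in the root namespace as the initiator-side 10.0.0.1, and the two pings above confirm reachability in both directions before nvmf_tgt starts inside the namespace. A condensed sketch of that bring-up, using the same commands that appear in the trace (interface names and addresses are whatever this CI host uses, not general defaults):

# Condensed from the nvmf/common.sh steps traced above.
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
modprobe nvme-tcp
# The target is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF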
00:10:50.455 [2024-12-13 03:21:51.580782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.713 [2024-12-13 03:21:51.697716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.713 [2024-12-13 03:21:51.801799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.713 [2024-12-13 03:21:51.801843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.713 [2024-12-13 03:21:51.801853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.713 [2024-12-13 03:21:51.801863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.713 [2024-12-13 03:21:51.801871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.713 [2024-12-13 03:21:51.804245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.713 [2024-12-13 03:21:51.804316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.713 [2024-12-13 03:21:51.804379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.713 [2024-12-13 03:21:51.804389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.282 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:51.542 [2024-12-13 03:21:52.598629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.542 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.801 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:51.801 03:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.061 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:52.061 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.321 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:52.321 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.580 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:52.580 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:52.840 03:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.098 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:53.098 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.357 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:53.357 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.615 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:53.615 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:53.874 03:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.874 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.874 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.132 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.132 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.390 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.650 [2024-12-13 03:21:55.661845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.650 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:54.909 03:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.909 03:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.289 03:21:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:56.289 03:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.289 03:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.289 03:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:56.289 03:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:56.289 03:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:58.195 03:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.195 [global] 00:10:58.195 thread=1 00:10:58.195 invalidate=1 00:10:58.195 rw=write 00:10:58.195 time_based=1 00:10:58.195 runtime=1 00:10:58.195 ioengine=libaio 00:10:58.195 direct=1 00:10:58.195 bs=4096 00:10:58.195 iodepth=1 00:10:58.195 norandommap=0 00:10:58.195 numjobs=1 00:10:58.195 00:10:58.195 verify_dump=1 00:10:58.195 verify_backlog=512 00:10:58.195 verify_state_save=0 00:10:58.195 do_verify=1 00:10:58.195 verify=crc32c-intel 00:10:58.195 [job0] 00:10:58.195 filename=/dev/nvme0n1 00:10:58.195 [job1] 00:10:58.195 filename=/dev/nvme0n2 00:10:58.195 [job2] 00:10:58.195 filename=/dev/nvme0n3 00:10:58.195 [job3] 00:10:58.195 filename=/dev/nvme0n4 00:10:58.195 Could not set queue depth (nvme0n1) 00:10:58.195 Could not set queue depth (nvme0n2) 00:10:58.195 Could not set queue depth (nvme0n3) 00:10:58.195 Could not set queue depth (nvme0n4) 00:10:58.455 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.455 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.455 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.455 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.455 fio-3.35 00:10:58.455 Starting 4 threads 00:10:59.833 00:10:59.833 job0: (groupid=0, jobs=1): err= 0: pid=2555201: Fri Dec 13 03:22:00 2024 00:10:59.833 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:59.833 slat (nsec): min=6817, max=36749, avg=7894.04, stdev=1370.72 00:10:59.833 clat (usec): min=200, max=791, avg=271.55, stdev=52.68 00:10:59.833 lat (usec): min=208, max=799, avg=279.44, stdev=52.67 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 235], 
00:10:59.833 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 273], 00:10:59.833 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 363], 00:10:59.833 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 578], 99.95th=[ 603], 00:10:59.833 | 99.99th=[ 791] 00:10:59.833 write: IOPS=2099, BW=8400KiB/s (8601kB/s)(8408KiB/1001msec); 0 zone resets 00:10:59.833 slat (nsec): min=9987, max=41912, avg=11632.35, stdev=2456.38 00:10:59.833 clat (usec): min=133, max=3249, avg=185.95, stdev=73.78 00:10:59.833 lat (usec): min=143, max=3267, avg=197.58, stdev=74.22 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:10:59.833 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:59.833 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 223], 95.00th=[ 237], 00:10:59.833 | 99.00th=[ 281], 99.50th=[ 318], 99.90th=[ 429], 99.95th=[ 898], 00:10:59.833 | 99.99th=[ 3261] 00:10:59.833 bw ( KiB/s): min= 8840, max= 8840, per=42.65%, avg=8840.00, stdev= 0.00, samples=1 00:10:59.833 iops : min= 2210, max= 2210, avg=2210.00, stdev= 0.00, samples=1 00:10:59.833 lat (usec) : 250=69.76%, 500=29.73%, 750=0.43%, 1000=0.05% 00:10:59.833 lat (msec) : 4=0.02% 00:10:59.833 cpu : usr=3.70%, sys=6.30%, ctx=4150, majf=0, minf=1 00:10:59.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 issued rwts: total=2048,2102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.833 job1: (groupid=0, jobs=1): err= 0: pid=2555203: Fri Dec 13 03:22:00 2024 00:10:59.833 read: IOPS=24, BW=96.3KiB/s (98.7kB/s)(100KiB/1038msec) 00:10:59.833 slat (nsec): min=4125, max=24664, avg=15333.44, stdev=5746.19 00:10:59.833 clat (usec): min=297, max=41194, avg=37688.29, stdev=11244.89 00:10:59.833 lat (usec): min=309, max=41205, avg=37703.63, stdev=11245.68 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 297], 5.00th=[ 359], 10.00th=[40633], 20.00th=[40633], 00:10:59.833 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:10:59.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:59.833 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:59.833 | 99.99th=[41157] 00:10:59.833 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:59.833 slat (nsec): min=3428, max=74600, avg=6666.21, stdev=7359.96 00:10:59.833 clat (usec): min=134, max=353, avg=177.14, stdev=23.77 00:10:59.833 lat (usec): min=138, max=391, avg=183.81, stdev=27.41 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:10:59.833 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:10:59.833 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 217], 00:10:59.833 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 355], 99.95th=[ 355], 00:10:59.833 | 99.99th=[ 355] 00:10:59.833 bw ( KiB/s): min= 4096, max= 4096, per=19.76%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.833 lat (usec) : 250=93.85%, 500=1.86% 00:10:59.833 lat (msec) : 50=4.28% 00:10:59.833 cpu : usr=0.19%, sys=0.48%, ctx=538, majf=0, minf=1 00:10:59.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.833 job2: (groupid=0, jobs=1): err= 0: pid=2555204: Fri Dec 13 03:22:00 2024 00:10:59.833 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:59.833 slat (nsec): min=7097, max=32102, avg=8416.04, stdev=1905.21 00:10:59.833 clat (usec): min=214, max=41645, avg=416.72, stdev=2184.05 00:10:59.833 lat (usec): min=222, max=41660, avg=425.13, stdev=2184.67 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 255], 00:10:59.833 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:10:59.833 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 359], 95.00th=[ 429], 00:10:59.833 | 99.00th=[ 570], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41681], 00:10:59.833 | 99.99th=[41681] 00:10:59.833 write: IOPS=1738, BW=6953KiB/s (7120kB/s)(6960KiB/1001msec); 0 zone resets 00:10:59.833 slat (nsec): min=10064, max=64387, avg=11986.03, stdev=2462.02 00:10:59.833 clat (usec): min=127, max=314, avg=182.61, stdev=20.73 00:10:59.833 lat (usec): min=139, max=345, avg=194.59, stdev=21.37 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 169], 00:10:59.833 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:10:59.833 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:10:59.833 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 314], 00:10:59.833 | 99.99th=[ 314] 00:10:59.833 bw ( KiB/s): min= 5832, max= 5832, per=28.14%, avg=5832.00, stdev= 0.00, samples=1 00:10:59.833 iops : min= 1458, max= 1458, avg=1458.00, stdev= 0.00, samples=1 00:10:59.833 lat (usec) : 250=59.22%, 500=39.96%, 750=0.67% 00:10:59.833 lat (msec) : 50=0.15% 00:10:59.833 cpu : usr=1.70%, sys=3.60%, ctx=3278, majf=0, minf=1 00:10:59.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 issued rwts: total=1536,1740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.833 job3: (groupid=0, jobs=1): err= 0: pid=2555205: Fri Dec 13 03:22:00 2024 00:10:59.833 read: IOPS=519, BW=2079KiB/s (2129kB/s)(2112KiB/1016msec) 00:10:59.833 slat (nsec): min=6922, max=38917, avg=8678.75, stdev=2675.83 00:10:59.833 clat (usec): min=217, max=41918, avg=1487.38, stdev=6993.10 00:10:59.833 lat (usec): min=226, max=41931, avg=1496.06, stdev=6995.02 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:10:59.833 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:10:59.833 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 289], 00:10:59.833 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:59.833 | 99.99th=[41681] 00:10:59.833 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:10:59.833 slat (usec): min=7, max=846, avg=12.74, stdev=26.17 00:10:59.833 clat (usec): min=150, max=3206, avg=204.09, stdev=100.72 00:10:59.833 lat (usec): min=161, max=3217, avg=216.83, 
stdev=104.27 00:10:59.833 clat percentiles (usec): 00:10:59.833 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:10:59.833 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 202], 00:10:59.833 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 249], 00:10:59.833 | 99.00th=[ 351], 99.50th=[ 392], 99.90th=[ 635], 99.95th=[ 3195], 00:10:59.833 | 99.99th=[ 3195] 00:10:59.833 bw ( KiB/s): min= 8192, max= 8192, per=39.53%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.833 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.833 lat (usec) : 250=79.06%, 500=19.72%, 750=0.13% 00:10:59.833 lat (msec) : 4=0.06%, 50=1.03% 00:10:59.833 cpu : usr=0.69%, sys=1.67%, ctx=1554, majf=0, minf=1 00:10:59.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.833 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.833 00:10:59.833 Run status group 0 (all jobs): 00:10:59.834 READ: bw=15.6MiB/s (16.3MB/s), 96.3KiB/s-8184KiB/s (98.7kB/s-8380kB/s), io=16.2MiB (16.9MB), run=1001-1038msec 00:10:59.834 WRITE: bw=20.2MiB/s (21.2MB/s), 1973KiB/s-8400KiB/s (2020kB/s-8601kB/s), io=21.0MiB (22.0MB), run=1001-1038msec 00:10:59.834 00:10:59.834 Disk stats (read/write): 00:10:59.834 nvme0n1: ios=1692/2048, merge=0/0, ticks=576/364, in_queue=940, util=91.68% 00:10:59.834 nvme0n2: ios=70/512, merge=0/0, ticks=1297/85, in_queue=1382, util=98.68% 00:10:59.834 nvme0n3: ios=1209/1536, merge=0/0, ticks=1500/272, in_queue=1772, util=98.75% 00:10:59.834 nvme0n4: ios=583/1024, merge=0/0, ticks=799/202, in_queue=1001, util=98.85% 00:10:59.834 03:22:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:59.834 [global] 00:10:59.834 thread=1 00:10:59.834 invalidate=1 00:10:59.834 rw=randwrite 00:10:59.834 time_based=1 00:10:59.834 runtime=1 00:10:59.834 ioengine=libaio 00:10:59.834 direct=1 00:10:59.834 bs=4096 00:10:59.834 iodepth=1 00:10:59.834 norandommap=0 00:10:59.834 numjobs=1 00:10:59.834 00:10:59.834 verify_dump=1 00:10:59.834 verify_backlog=512 00:10:59.834 verify_state_save=0 00:10:59.834 do_verify=1 00:10:59.834 verify=crc32c-intel 00:10:59.834 [job0] 00:10:59.834 filename=/dev/nvme0n1 00:10:59.834 [job1] 00:10:59.834 filename=/dev/nvme0n2 00:10:59.834 [job2] 00:10:59.834 filename=/dev/nvme0n3 00:10:59.834 [job3] 00:10:59.834 filename=/dev/nvme0n4 00:10:59.834 Could not set queue depth (nvme0n1) 00:10:59.834 Could not set queue depth (nvme0n2) 00:10:59.834 Could not set queue depth (nvme0n3) 00:10:59.834 Could not set queue depth (nvme0n4) 00:11:00.092 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.092 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.092 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.092 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:00.092 fio-3.35 00:11:00.092 Starting 4 threads 00:11:01.489 00:11:01.489 job0: (groupid=0, jobs=1): err= 0: pid=2555565: Fri Dec 13 
03:22:02 2024 00:11:01.489 read: IOPS=1075, BW=4303KiB/s (4406kB/s)(4436KiB/1031msec) 00:11:01.489 slat (nsec): min=6842, max=31521, avg=7909.20, stdev=1791.07 00:11:01.489 clat (usec): min=215, max=41172, avg=596.16, stdev=3022.64 00:11:01.489 lat (usec): min=223, max=41190, avg=604.07, stdev=3023.55 00:11:01.489 clat percentiles (usec): 00:11:01.489 | 1.00th=[ 229], 5.00th=[ 249], 10.00th=[ 273], 20.00th=[ 306], 00:11:01.489 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 371], 60.00th=[ 388], 00:11:01.489 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 445], 95.00th=[ 465], 00:11:01.489 | 99.00th=[ 510], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:01.489 | 99.99th=[41157] 00:11:01.489 write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:11:01.489 slat (usec): min=9, max=30426, avg=30.49, stdev=776.09 00:11:01.489 clat (usec): min=129, max=468, avg=200.23, stdev=28.95 00:11:01.489 lat (usec): min=140, max=30733, avg=230.72, stdev=779.35 00:11:01.489 clat percentiles (usec): 00:11:01.489 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 174], 00:11:01.489 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 202], 60.00th=[ 208], 00:11:01.489 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 241], 00:11:01.489 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 400], 99.95th=[ 469], 00:11:01.489 | 99.99th=[ 469] 00:11:01.489 bw ( KiB/s): min= 4096, max= 8192, per=25.85%, avg=6144.00, stdev=2896.31, samples=2 00:11:01.489 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:01.489 lat (usec) : 250=59.09%, 500=40.38%, 750=0.26% 00:11:01.489 lat (msec) : 20=0.04%, 50=0.23% 00:11:01.489 cpu : usr=0.87%, sys=2.91%, ctx=2648, majf=0, minf=1 00:11:01.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.489 issued rwts: total=1109,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.489 job1: (groupid=0, jobs=1): err= 0: pid=2555566: Fri Dec 13 03:22:02 2024 00:11:01.489 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:11:01.490 slat (nsec): min=9433, max=23819, avg=21597.27, stdev=2818.63 00:11:01.490 clat (usec): min=40765, max=41084, avg=40952.53, stdev=63.40 00:11:01.490 lat (usec): min=40774, max=41107, avg=40974.13, stdev=65.13 00:11:01.490 clat percentiles (usec): 00:11:01.490 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:01.490 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:01.490 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:01.490 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:01.490 | 99.99th=[41157] 00:11:01.490 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:01.490 slat (nsec): min=9654, max=39292, avg=10877.44, stdev=2121.14 00:11:01.490 clat (usec): min=160, max=467, avg=184.80, stdev=19.30 00:11:01.490 lat (usec): min=170, max=477, avg=195.68, stdev=19.84 00:11:01.490 clat percentiles (usec): 00:11:01.490 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 174], 00:11:01.490 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:11:01.490 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 206], 00:11:01.490 | 99.00th=[ 225], 99.50th=[ 285], 99.90th=[ 469], 99.95th=[ 469], 
00:11:01.490 | 99.99th=[ 469] 00:11:01.490 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:11:01.490 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:01.490 lat (usec) : 250=95.32%, 500=0.56% 00:11:01.490 lat (msec) : 50=4.12% 00:11:01.490 cpu : usr=0.30%, sys=1.00%, ctx=534, majf=0, minf=2 00:11:01.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.490 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.490 job2: (groupid=0, jobs=1): err= 0: pid=2555568: Fri Dec 13 03:22:02 2024 00:11:01.490 read: IOPS=2016, BW=8068KiB/s (8262kB/s)(8076KiB/1001msec) 00:11:01.490 slat (nsec): min=6902, max=22976, avg=8088.56, stdev=1167.16 00:11:01.490 clat (usec): min=218, max=496, avg=271.56, stdev=23.57 00:11:01.490 lat (usec): min=227, max=503, avg=279.65, stdev=23.58 00:11:01.490 clat percentiles (usec): 00:11:01.490 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 258], 00:11:01.490 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:11:01.490 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 289], 95.00th=[ 297], 00:11:01.490 | 99.00th=[ 383], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 482], 00:11:01.490 | 99.99th=[ 498] 00:11:01.490 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:01.490 slat (nsec): min=9834, max=41926, avg=11129.66, stdev=1920.96 00:11:01.490 clat (usec): min=141, max=336, avg=195.56, stdev=30.91 00:11:01.490 lat (usec): min=151, max=378, avg=206.69, stdev=31.11 00:11:01.490 clat percentiles (usec): 00:11:01.490 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:11:01.490 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:11:01.490 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 227], 95.00th=[ 281], 00:11:01.490 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 306], 99.95th=[ 310], 00:11:01.490 | 99.99th=[ 338] 00:11:01.490 bw ( KiB/s): min= 8192, max= 8192, per=34.47%, avg=8192.00, stdev= 0.00, samples=1 00:11:01.490 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:01.490 lat (usec) : 250=51.27%, 500=48.73% 00:11:01.490 cpu : usr=3.80%, sys=5.90%, ctx=4067, majf=0, minf=1 00:11:01.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.490 issued rwts: total=2019,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.490 job3: (groupid=0, jobs=1): err= 0: pid=2555571: Fri Dec 13 03:22:02 2024 00:11:01.490 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:01.490 slat (nsec): min=6987, max=48719, avg=8444.01, stdev=3099.37 00:11:01.490 clat (usec): min=208, max=41330, avg=352.86, stdev=1050.28 00:11:01.490 lat (usec): min=216, max=41346, avg=361.31, stdev=1050.48 00:11:01.490 clat percentiles (usec): 00:11:01.490 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:11:01.490 | 30.00th=[ 260], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 322], 00:11:01.490 | 70.00th=[ 355], 80.00th=[ 388], 90.00th=[ 449], 95.00th=[ 494], 00:11:01.490 | 99.00th=[ 
652], 99.50th=[ 660], 99.90th=[ 668], 99.95th=[41157], 00:11:01.490 | 99.99th=[41157] 00:11:01.490 write: IOPS=2027, BW=8112KiB/s (8307kB/s)(8120KiB/1001msec); 0 zone resets 00:11:01.490 slat (nsec): min=9603, max=38507, avg=10794.51, stdev=1652.28 00:11:01.490 clat (usec): min=139, max=475, avg=204.39, stdev=38.80 00:11:01.490 lat (usec): min=150, max=514, avg=215.18, stdev=39.00 00:11:01.490 clat percentiles (usec): 00:11:01.490 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:11:01.490 | 30.00th=[ 184], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:11:01.490 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 273], 00:11:01.490 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 383], 00:11:01.490 | 99.99th=[ 478] 00:11:01.490 bw ( KiB/s): min= 8192, max= 8192, per=34.47%, avg=8192.00, stdev= 0.00, samples=1 00:11:01.490 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:01.490 lat (usec) : 250=61.92%, 500=36.09%, 750=1.96% 00:11:01.490 lat (msec) : 50=0.03% 00:11:01.490 cpu : usr=2.20%, sys=3.30%, ctx=3567, majf=0, minf=1 00:11:01.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.490 issued rwts: total=1536,2030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.490 00:11:01.490 Run status group 0 (all jobs): 00:11:01.490 READ: bw=17.8MiB/s (18.6MB/s), 87.7KiB/s-8068KiB/s (89.8kB/s-8262kB/s), io=18.3MiB (19.2MB), run=1001-1031msec 00:11:01.490 WRITE: bw=23.2MiB/s (24.3MB/s), 2042KiB/s-8184KiB/s (2091kB/s-8380kB/s), io=23.9MiB (25.1MB), run=1001-1031msec 00:11:01.490 00:11:01.490 Disk stats (read/write): 00:11:01.490 nvme0n1: ios=1127/1536, merge=0/0, ticks=1316/297, in_queue=1613, util=85.36% 00:11:01.490 nvme0n2: ios=68/512, merge=0/0, ticks=806/91, in_queue=897, util=90.50% 00:11:01.490 nvme0n3: ios=1593/1962, merge=0/0, ticks=479/365, in_queue=844, util=94.77% 00:11:01.490 nvme0n4: ios=1382/1536, merge=0/0, ticks=852/319, in_queue=1171, util=94.30% 00:11:01.490 03:22:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:01.490 [global] 00:11:01.490 thread=1 00:11:01.490 invalidate=1 00:11:01.490 rw=write 00:11:01.490 time_based=1 00:11:01.490 runtime=1 00:11:01.490 ioengine=libaio 00:11:01.490 direct=1 00:11:01.490 bs=4096 00:11:01.490 iodepth=128 00:11:01.490 norandommap=0 00:11:01.490 numjobs=1 00:11:01.490 00:11:01.490 verify_dump=1 00:11:01.490 verify_backlog=512 00:11:01.490 verify_state_save=0 00:11:01.490 do_verify=1 00:11:01.490 verify=crc32c-intel 00:11:01.490 [job0] 00:11:01.490 filename=/dev/nvme0n1 00:11:01.490 [job1] 00:11:01.490 filename=/dev/nvme0n2 00:11:01.490 [job2] 00:11:01.490 filename=/dev/nvme0n3 00:11:01.490 [job3] 00:11:01.490 filename=/dev/nvme0n4 00:11:01.490 Could not set queue depth (nvme0n1) 00:11:01.490 Could not set queue depth (nvme0n2) 00:11:01.490 Could not set queue depth (nvme0n3) 00:11:01.490 Could not set queue depth (nvme0n4) 00:11:01.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.748 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.748 job2: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.748 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.748 fio-3.35 00:11:01.748 Starting 4 threads 00:11:03.119 00:11:03.119 job0: (groupid=0, jobs=1): err= 0: pid=2555939: Fri Dec 13 03:22:03 2024 00:11:03.119 read: IOPS=3301, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1007msec) 00:11:03.119 slat (nsec): min=1128, max=45125k, avg=137155.49, stdev=1277424.12 00:11:03.119 clat (usec): min=1155, max=81022, avg=19004.28, stdev=14409.71 00:11:03.119 lat (usec): min=4592, max=81047, avg=19141.44, stdev=14510.49 00:11:03.119 clat percentiles (usec): 00:11:03.119 | 1.00th=[ 5800], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[10421], 00:11:03.119 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[14091], 00:11:03.119 | 70.00th=[15795], 80.00th=[26084], 90.00th=[45876], 95.00th=[50594], 00:11:03.119 | 99.00th=[64750], 99.50th=[71828], 99.90th=[71828], 99.95th=[76022], 00:11:03.119 | 99.99th=[81265] 00:11:03.119 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:03.119 slat (usec): min=2, max=11817, avg=122.47, stdev=732.79 00:11:03.119 clat (usec): min=618, max=77866, avg=18011.22, stdev=15013.19 00:11:03.119 lat (usec): min=626, max=77876, avg=18133.69, stdev=15119.34 00:11:03.119 clat percentiles (usec): 00:11:03.119 | 1.00th=[ 2114], 5.00th=[ 3916], 10.00th=[ 5604], 20.00th=[ 8848], 00:11:03.119 | 30.00th=[ 9896], 40.00th=[11863], 50.00th=[12518], 60.00th=[13960], 00:11:03.119 | 70.00th=[19792], 80.00th=[23200], 90.00th=[36963], 95.00th=[57410], 00:11:03.119 | 99.00th=[68682], 99.50th=[70779], 99.90th=[72877], 99.95th=[78119], 00:11:03.119 | 99.99th=[78119] 00:11:03.119 bw ( KiB/s): min=12288, max=16384, per=20.96%, avg=14336.00, stdev=2896.31, samples=2 00:11:03.119 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:03.119 lat (usec) : 750=0.04% 00:11:03.119 lat (msec) : 2=0.17%, 4=2.45%, 10=19.97%, 20=50.79%, 50=19.45% 00:11:03.119 lat (msec) : 100=7.12% 00:11:03.119 cpu : usr=2.09%, sys=4.08%, ctx=348, majf=0, minf=2 00:11:03.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:03.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.119 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.119 job1: (groupid=0, jobs=1): err= 0: pid=2555940: Fri Dec 13 03:22:03 2024 00:11:03.119 read: IOPS=5428, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1006msec) 00:11:03.119 slat (nsec): min=1415, max=12340k, avg=87983.22, stdev=680873.25 00:11:03.119 clat (usec): min=3050, max=23506, avg=11716.56, stdev=2752.61 00:11:03.119 lat (usec): min=3831, max=28787, avg=11804.54, stdev=2815.48 00:11:03.120 clat percentiles (usec): 00:11:03.120 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9765], 00:11:03.120 | 30.00th=[10159], 40.00th=[10421], 50.00th=[11076], 60.00th=[11994], 00:11:03.120 | 70.00th=[12518], 80.00th=[13566], 90.00th=[15270], 95.00th=[17695], 00:11:03.120 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20579], 99.95th=[21890], 00:11:03.120 | 99.99th=[23462] 00:11:03.120 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:11:03.120 slat (usec): min=2, max=11441, avg=77.53, stdev=552.27 00:11:03.120 clat (usec): min=2572, 
max=51847, avg=11282.67, stdev=5180.42 00:11:03.120 lat (usec): min=2582, max=51856, avg=11360.20, stdev=5222.53 00:11:03.120 clat percentiles (usec): 00:11:03.120 | 1.00th=[ 3785], 5.00th=[ 5866], 10.00th=[ 7635], 20.00th=[ 9372], 00:11:03.120 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:11:03.120 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12780], 95.00th=[17695], 00:11:03.120 | 99.00th=[36439], 99.50th=[43779], 99.90th=[51119], 99.95th=[51119], 00:11:03.120 | 99.99th=[51643] 00:11:03.120 bw ( KiB/s): min=20752, max=24304, per=32.94%, avg=22528.00, stdev=2511.64, samples=2 00:11:03.120 iops : min= 5188, max= 6076, avg=5632.00, stdev=627.91, samples=2 00:11:03.120 lat (msec) : 4=0.70%, 10=26.67%, 20=70.02%, 50=2.53%, 100=0.07% 00:11:03.120 cpu : usr=5.07%, sys=6.57%, ctx=494, majf=0, minf=1 00:11:03.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:03.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.120 issued rwts: total=5461,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.120 job2: (groupid=0, jobs=1): err= 0: pid=2555941: Fri Dec 13 03:22:03 2024 00:11:03.120 read: IOPS=3788, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec) 00:11:03.120 slat (nsec): min=1502, max=11569k, avg=118953.81, stdev=752916.94 00:11:03.120 clat (usec): min=457, max=40075, avg=15031.69, stdev=5025.09 00:11:03.120 lat (usec): min=4232, max=40077, avg=15150.64, stdev=5067.66 00:11:03.120 clat percentiles (usec): 00:11:03.120 | 1.00th=[ 4490], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[12125], 00:11:03.120 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13829], 60.00th=[14222], 00:11:03.120 | 70.00th=[16450], 80.00th=[17957], 90.00th=[20055], 95.00th=[24511], 00:11:03.120 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:11:03.120 | 99.99th=[40109] 00:11:03.120 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:03.120 slat (usec): min=2, max=11237, avg=128.00, stdev=672.74 00:11:03.120 clat (usec): min=1577, max=75251, avg=17079.70, stdev=12930.27 00:11:03.120 lat (usec): min=1592, max=75255, avg=17207.71, stdev=13005.13 00:11:03.120 clat percentiles (usec): 00:11:03.120 | 1.00th=[ 3916], 5.00th=[ 6259], 10.00th=[ 7898], 20.00th=[11731], 00:11:03.120 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13173], 60.00th=[14222], 00:11:03.120 | 70.00th=[14615], 80.00th=[17433], 90.00th=[28443], 95.00th=[51119], 00:11:03.120 | 99.00th=[71828], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:11:03.120 | 99.99th=[74974] 00:11:03.120 bw ( KiB/s): min=12576, max=20192, per=23.95%, avg=16384.00, stdev=5385.33, samples=2 00:11:03.120 iops : min= 3144, max= 5048, avg=4096.00, stdev=1346.33, samples=2 00:11:03.120 lat (usec) : 500=0.01% 00:11:03.120 lat (msec) : 2=0.14%, 4=0.38%, 10=10.22%, 20=74.81%, 50=11.74% 00:11:03.120 lat (msec) : 100=2.70% 00:11:03.120 cpu : usr=3.00%, sys=5.19%, ctx=495, majf=0, minf=1 00:11:03.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:03.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.120 issued rwts: total=3792,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.120 job3: 
(groupid=0, jobs=1): err= 0: pid=2555942: Fri Dec 13 03:22:03 2024 00:11:03.120 read: IOPS=3967, BW=15.5MiB/s (16.3MB/s)(16.2MiB/1048msec) 00:11:03.120 slat (nsec): min=1675, max=11390k, avg=111893.06, stdev=722889.87 00:11:03.120 clat (usec): min=5334, max=75048, avg=14463.01, stdev=6489.85 00:11:03.120 lat (usec): min=5343, max=75053, avg=14574.90, stdev=6559.21 00:11:03.120 clat percentiles (usec): 00:11:03.120 | 1.00th=[ 7701], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11731], 00:11:03.120 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13042], 60.00th=[14091], 00:11:03.120 | 70.00th=[14746], 80.00th=[15926], 90.00th=[18220], 95.00th=[21627], 00:11:03.120 | 99.00th=[55313], 99.50th=[65799], 99.90th=[74974], 99.95th=[74974], 00:11:03.120 | 99.99th=[74974] 00:11:03.120 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(18.0MiB/1048msec); 0 zone resets 00:11:03.120 slat (usec): min=2, max=10864, avg=109.73, stdev=684.84 00:11:03.120 clat (msec): min=3, max=107, avg=15.79, stdev=11.75 00:11:03.120 lat (msec): min=3, max=107, avg=15.90, stdev=11.82 00:11:03.120 clat percentiles (msec): 00:11:03.120 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:11:03.120 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:11:03.120 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 24], 95.00th=[ 30], 00:11:03.120 | 99.00th=[ 83], 99.50th=[ 90], 99.90th=[ 108], 99.95th=[ 108], 00:11:03.120 | 99.99th=[ 108] 00:11:03.120 bw ( KiB/s): min=15856, max=20480, per=26.56%, avg=18168.00, stdev=3269.66, samples=2 00:11:03.120 iops : min= 3964, max= 5120, avg=4542.00, stdev=817.42, samples=2 00:11:03.120 lat (msec) : 4=0.07%, 10=9.33%, 20=79.26%, 50=9.25%, 100=1.92% 00:11:03.120 lat (msec) : 250=0.17% 00:11:03.120 cpu : usr=3.53%, sys=6.30%, ctx=414, majf=0, minf=1 00:11:03.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:03.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.120 issued rwts: total=4158,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.120 00:11:03.120 Run status group 0 (all jobs): 00:11:03.120 READ: bw=62.4MiB/s (65.4MB/s), 12.9MiB/s-21.2MiB/s (13.5MB/s-22.2MB/s), io=65.4MiB (68.6MB), run=1001-1048msec 00:11:03.120 WRITE: bw=66.8MiB/s (70.0MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-22.9MB/s), io=70.0MiB (73.4MB), run=1001-1048msec 00:11:03.120 00:11:03.120 Disk stats (read/write): 00:11:03.120 nvme0n1: ios=2586/2927, merge=0/0, ticks=37989/43955, in_queue=81944, util=86.07% 00:11:03.120 nvme0n2: ios=4657/4759, merge=0/0, ticks=52360/50585, in_queue=102945, util=90.06% 00:11:03.120 nvme0n3: ios=3129/3343, merge=0/0, ticks=39921/52856, in_queue=92777, util=93.04% 00:11:03.120 nvme0n4: ios=4058/4096, merge=0/0, ticks=45521/44743, in_queue=90264, util=95.18% 00:11:03.120 03:22:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:03.120 [global] 00:11:03.120 thread=1 00:11:03.120 invalidate=1 00:11:03.120 rw=randwrite 00:11:03.120 time_based=1 00:11:03.120 runtime=1 00:11:03.120 ioengine=libaio 00:11:03.120 direct=1 00:11:03.120 bs=4096 00:11:03.120 iodepth=128 00:11:03.120 norandommap=0 00:11:03.120 numjobs=1 00:11:03.120 00:11:03.120 verify_dump=1 00:11:03.120 verify_backlog=512 00:11:03.120 verify_state_save=0 00:11:03.120 do_verify=1 
00:11:03.120 verify=crc32c-intel 00:11:03.120 [job0] 00:11:03.120 filename=/dev/nvme0n1 00:11:03.120 [job1] 00:11:03.120 filename=/dev/nvme0n2 00:11:03.120 [job2] 00:11:03.120 filename=/dev/nvme0n3 00:11:03.120 [job3] 00:11:03.120 filename=/dev/nvme0n4 00:11:03.120 Could not set queue depth (nvme0n1) 00:11:03.120 Could not set queue depth (nvme0n2) 00:11:03.120 Could not set queue depth (nvme0n3) 00:11:03.120 Could not set queue depth (nvme0n4) 00:11:03.378 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.378 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.378 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.378 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:03.378 fio-3.35 00:11:03.378 Starting 4 threads 00:11:04.748 00:11:04.748 job0: (groupid=0, jobs=1): err= 0: pid=2556309: Fri Dec 13 03:22:05 2024 00:11:04.748 read: IOPS=4181, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1010msec) 00:11:04.748 slat (nsec): min=1064, max=16804k, avg=100156.43, stdev=770459.78 00:11:04.748 clat (usec): min=2802, max=39806, avg=14533.90, stdev=5336.82 00:11:04.748 lat (usec): min=2810, max=39815, avg=14634.05, stdev=5382.01 00:11:04.748 clat percentiles (usec): 00:11:04.748 | 1.00th=[ 3425], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[11207], 00:11:04.748 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[14484], 00:11:04.748 | 70.00th=[16319], 80.00th=[18220], 90.00th=[20317], 95.00th=[22414], 00:11:04.748 | 99.00th=[38011], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:11:04.748 | 99.99th=[39584] 00:11:04.748 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:11:04.748 slat (nsec): min=1886, max=12713k, avg=86839.00, stdev=647156.82 00:11:04.748 clat (usec): min=1111, max=53007, avg=14526.00, stdev=9606.79 00:11:04.748 lat (usec): min=1121, max=53016, avg=14612.84, stdev=9663.32 00:11:04.748 clat percentiles (usec): 00:11:04.748 | 1.00th=[ 2802], 5.00th=[ 4424], 10.00th=[ 7177], 20.00th=[ 8356], 00:11:04.748 | 30.00th=[10159], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:11:04.748 | 70.00th=[14222], 80.00th=[16450], 90.00th=[25035], 95.00th=[39584], 00:11:04.748 | 99.00th=[47973], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:11:04.748 | 99.99th=[53216] 00:11:04.748 bw ( KiB/s): min=16056, max=20800, per=24.93%, avg=18428.00, stdev=3354.51, samples=2 00:11:04.748 iops : min= 4014, max= 5200, avg=4607.00, stdev=838.63, samples=2 00:11:04.748 lat (msec) : 2=0.11%, 4=1.80%, 10=19.66%, 20=64.24%, 50=13.87% 00:11:04.748 lat (msec) : 100=0.32% 00:11:04.748 cpu : usr=2.78%, sys=4.96%, ctx=357, majf=0, minf=1 00:11:04.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:04.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.748 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.748 job1: (groupid=0, jobs=1): err= 0: pid=2556310: Fri Dec 13 03:22:05 2024 00:11:04.748 read: IOPS=5289, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1004msec) 00:11:04.748 slat (nsec): min=1165, max=10803k, avg=95898.85, stdev=654684.04 00:11:04.748 clat (usec): min=2862, max=44988, 
avg=12513.40, stdev=4612.35 00:11:04.748 lat (usec): min=4893, max=44991, avg=12609.30, stdev=4636.51 00:11:04.748 clat percentiles (usec): 00:11:04.748 | 1.00th=[ 5735], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10159], 00:11:04.748 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11600], 60.00th=[12125], 00:11:04.748 | 70.00th=[12649], 80.00th=[14091], 90.00th=[17171], 95.00th=[19530], 00:11:04.748 | 99.00th=[39584], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:11:04.748 | 99.99th=[44827] 00:11:04.748 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:11:04.748 slat (nsec): min=1803, max=28636k, avg=81152.63, stdev=565270.53 00:11:04.748 clat (usec): min=1671, max=28675, avg=10747.55, stdev=2182.01 00:11:04.748 lat (usec): min=1682, max=33807, avg=10828.70, stdev=2232.87 00:11:04.748 clat percentiles (usec): 00:11:04.748 | 1.00th=[ 4424], 5.00th=[ 5932], 10.00th=[ 7373], 20.00th=[10028], 00:11:04.748 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:11:04.748 | 70.00th=[11338], 80.00th=[12256], 90.00th=[12649], 95.00th=[13566], 00:11:04.748 | 99.00th=[16057], 99.50th=[16319], 99.90th=[22152], 99.95th=[22414], 00:11:04.748 | 99.99th=[28705] 00:11:04.748 bw ( KiB/s): min=21264, max=23792, per=30.48%, avg=22528.00, stdev=1787.57, samples=2 00:11:04.748 iops : min= 5316, max= 5948, avg=5632.00, stdev=446.89, samples=2 00:11:04.748 lat (msec) : 2=0.03%, 4=0.38%, 10=17.44%, 20=79.91%, 50=2.24% 00:11:04.748 cpu : usr=3.19%, sys=5.88%, ctx=606, majf=0, minf=1 00:11:04.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:04.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.748 issued rwts: total=5311,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.748 job2: (groupid=0, jobs=1): err= 0: pid=2556311: Fri Dec 13 03:22:05 2024 00:11:04.748 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:11:04.748 slat (nsec): min=1671, max=20260k, avg=116483.74, stdev=737790.30 00:11:04.748 clat (usec): min=10727, max=35533, avg=15965.05, stdev=4775.33 00:11:04.748 lat (usec): min=10970, max=47222, avg=16081.54, stdev=4805.91 00:11:04.748 clat percentiles (usec): 00:11:04.748 | 1.00th=[11076], 5.00th=[12125], 10.00th=[12780], 20.00th=[13566], 00:11:04.748 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:11:04.748 | 70.00th=[15533], 80.00th=[16188], 90.00th=[19006], 95.00th=[29230], 00:11:04.748 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:11:04.748 | 99.99th=[35390] 00:11:04.748 write: IOPS=3388, BW=13.2MiB/s (13.9MB/s)(13.4MiB/1010msec); 0 zone resets 00:11:04.748 slat (usec): min=2, max=33288, avg=182.23, stdev=1356.44 00:11:04.748 clat (usec): min=1166, max=69510, avg=22820.87, stdev=14139.63 00:11:04.748 lat (usec): min=1791, max=73277, avg=23003.10, stdev=14238.40 00:11:04.748 clat percentiles (usec): 00:11:04.749 | 1.00th=[ 5276], 5.00th=[10421], 10.00th=[11207], 20.00th=[13698], 00:11:04.749 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[19006], 00:11:04.749 | 70.00th=[25035], 80.00th=[32900], 90.00th=[44303], 95.00th=[55313], 00:11:04.749 | 99.00th=[61604], 99.50th=[62129], 99.90th=[69731], 99.95th=[69731], 00:11:04.749 | 99.99th=[69731] 00:11:04.749 bw ( KiB/s): min=11512, max=14840, per=17.82%, avg=13176.00, stdev=2353.25, samples=2 00:11:04.749 iops : 
min= 2878, max= 3710, avg=3294.00, stdev=588.31, samples=2 00:11:04.749 lat (msec) : 2=0.05%, 10=2.22%, 20=72.42%, 50=21.59%, 100=3.73% 00:11:04.749 cpu : usr=2.87%, sys=4.56%, ctx=298, majf=0, minf=1 00:11:04.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:04.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.749 issued rwts: total=3072,3422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.749 job3: (groupid=0, jobs=1): err= 0: pid=2556312: Fri Dec 13 03:22:05 2024 00:11:04.749 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:11:04.749 slat (nsec): min=1649, max=14877k, avg=113725.50, stdev=829073.68 00:11:04.749 clat (usec): min=4917, max=29705, avg=14042.39, stdev=3588.61 00:11:04.749 lat (usec): min=4923, max=30473, avg=14156.12, stdev=3652.73 00:11:04.749 clat percentiles (usec): 00:11:04.749 | 1.00th=[ 7832], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11600], 00:11:04.749 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13435], 60.00th=[14091], 00:11:04.749 | 70.00th=[15008], 80.00th=[15795], 90.00th=[17695], 95.00th=[21103], 00:11:04.749 | 99.00th=[26608], 99.50th=[27657], 99.90th=[29754], 99.95th=[29754], 00:11:04.749 | 99.99th=[29754] 00:11:04.749 write: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1011msec); 0 zone resets 00:11:04.749 slat (usec): min=2, max=9995, avg=89.24, stdev=440.80 00:11:04.749 clat (usec): min=1878, max=28619, avg=12612.60, stdev=2893.48 00:11:04.749 lat (usec): min=2913, max=28623, avg=12701.83, stdev=2923.68 00:11:04.749 clat percentiles (usec): 00:11:04.749 | 1.00th=[ 4113], 5.00th=[ 7308], 10.00th=[ 8848], 20.00th=[10945], 00:11:04.749 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[12911], 00:11:04.749 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15664], 95.00th=[16909], 00:11:04.749 | 99.00th=[19268], 99.50th=[21890], 99.90th=[26346], 99.95th=[28181], 00:11:04.749 | 99.99th=[28705] 00:11:04.749 bw ( KiB/s): min=18680, max=20480, per=26.49%, avg=19580.00, stdev=1272.79, samples=2 00:11:04.749 iops : min= 4670, max= 5120, avg=4895.00, stdev=318.20, samples=2 00:11:04.749 lat (msec) : 2=0.01%, 4=0.39%, 10=9.73%, 20=85.69%, 50=4.17% 00:11:04.749 cpu : usr=3.17%, sys=5.94%, ctx=560, majf=0, minf=1 00:11:04.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:04.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.749 issued rwts: total=4608,5022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.749 00:11:04.749 Run status group 0 (all jobs): 00:11:04.749 READ: bw=66.5MiB/s (69.7MB/s), 11.9MiB/s-20.7MiB/s (12.5MB/s-21.7MB/s), io=67.2MiB (70.5MB), run=1004-1011msec 00:11:04.749 WRITE: bw=72.2MiB/s (75.7MB/s), 13.2MiB/s-21.9MiB/s (13.9MB/s-23.0MB/s), io=73.0MiB (76.5MB), run=1004-1011msec 00:11:04.749 00:11:04.749 Disk stats (read/write): 00:11:04.749 nvme0n1: ios=3634/3892, merge=0/0, ticks=42340/51603, in_queue=93943, util=87.37% 00:11:04.749 nvme0n2: ios=4658/4647, merge=0/0, ticks=45946/36530, in_queue=82476, util=91.37% 00:11:04.749 nvme0n3: ios=2735/3072, merge=0/0, ticks=16137/27135, in_queue=43272, util=94.91% 00:11:04.749 nvme0n4: ios=3956/4096, merge=0/0, ticks=45751/39530, in_queue=85281, 
util=94.35% 00:11:04.749 03:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:04.749 03:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2556540 00:11:04.749 03:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:04.749 03:22:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:04.749 [global] 00:11:04.749 thread=1 00:11:04.749 invalidate=1 00:11:04.749 rw=read 00:11:04.749 time_based=1 00:11:04.749 runtime=10 00:11:04.749 ioengine=libaio 00:11:04.749 direct=1 00:11:04.749 bs=4096 00:11:04.749 iodepth=1 00:11:04.749 norandommap=1 00:11:04.749 numjobs=1 00:11:04.749 00:11:04.749 [job0] 00:11:04.749 filename=/dev/nvme0n1 00:11:04.749 [job1] 00:11:04.749 filename=/dev/nvme0n2 00:11:04.749 [job2] 00:11:04.749 filename=/dev/nvme0n3 00:11:04.749 [job3] 00:11:04.749 filename=/dev/nvme0n4 00:11:04.749 Could not set queue depth (nvme0n1) 00:11:04.749 Could not set queue depth (nvme0n2) 00:11:04.749 Could not set queue depth (nvme0n3) 00:11:04.749 Could not set queue depth (nvme0n4) 00:11:04.749 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.749 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.749 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.749 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.749 fio-3.35 00:11:04.749 Starting 4 threads 00:11:08.023 03:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:08.023 03:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:08.023 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:11:08.023 fio: pid=2556680, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.023 03:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.023 03:22:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:08.023 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=303104, buflen=4096 00:11:08.023 fio: pid=2556679, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.023 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=323584, buflen=4096 00:11:08.023 fio: pid=2556677, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.280 03:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.280 03:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:08.537 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10043392, buflen=4096 00:11:08.537 fio: pid=2556678, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:08.537 03:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.537 03:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:08.537 00:11:08.537 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2556677: Fri Dec 13 03:22:09 2024 00:11:08.537 read: IOPS=25, BW=101KiB/s (103kB/s)(316KiB/3140msec) 00:11:08.537 slat (nsec): min=10436, max=73773, avg=24568.84, stdev=7255.23 00:11:08.537 clat (usec): min=230, max=41962, avg=39438.10, stdev=7824.49 00:11:08.537 lat (usec): min=252, max=41991, avg=39462.67, stdev=7824.37 00:11:08.537 clat percentiles (usec): 00:11:08.537 | 1.00th=[ 231], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:08.537 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.537 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.537 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.537 | 99.99th=[42206] 00:11:08.537 bw ( KiB/s): min= 93, max= 112, per=3.22%, avg=100.83, stdev= 7.11, samples=6 00:11:08.537 iops : min= 23, max= 28, avg=25.17, stdev= 1.83, samples=6 00:11:08.537 lat (usec) : 250=1.25%, 500=2.50% 00:11:08.537 lat (msec) : 50=95.00% 00:11:08.537 cpu : usr=0.00%, sys=0.13%, ctx=85, majf=0, minf=1 00:11:08.537 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.537 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.537 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.537 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.537 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2556678: Fri Dec 13 03:22:09 2024 00:11:08.537 read: IOPS=712, BW=2847KiB/s (2915kB/s)(9808KiB/3445msec) 00:11:08.537 slat (usec): min=6, max=15572, avg=34.67, stdev=583.54 00:11:08.537 clat (usec): min=185, max=42406, avg=1340.94, stdev=6597.82 00:11:08.537 lat (usec): min=208, max=42428, avg=1370.81, stdev=6618.33 00:11:08.537 clat percentiles (usec): 00:11:08.537 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:11:08.537 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:11:08.537 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 277], 00:11:08.537 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:11:08.537 | 99.99th=[42206] 00:11:08.538 bw ( KiB/s): min= 96, max= 8671, per=51.64%, avg=1602.50, stdev=3467.60, samples=6 00:11:08.538 iops : min= 24, max= 2167, avg=400.50, stdev=866.60, samples=6 00:11:08.538 lat (usec) : 250=67.67%, 500=29.51%, 1000=0.08% 00:11:08.538 lat (msec) : 50=2.69% 00:11:08.538 cpu : usr=0.52%, sys=1.07%, ctx=2459, majf=0, minf=2 00:11:08.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.538 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.538 issued rwts: total=2453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.538 
job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2556679: Fri Dec 13 03:22:09 2024 00:11:08.538 read: IOPS=25, BW=101KiB/s (103kB/s)(296KiB/2935msec) 00:11:08.538 slat (nsec): min=11895, max=66305, avg=24110.40, stdev=5359.84 00:11:08.538 clat (usec): min=282, max=41206, avg=39336.28, stdev=7993.79 00:11:08.538 lat (usec): min=305, max=41229, avg=39360.40, stdev=7993.22 00:11:08.538 clat percentiles (usec): 00:11:08.538 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:08.538 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.538 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.538 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:08.538 | 99.99th=[41157] 00:11:08.538 bw ( KiB/s): min= 96, max= 120, per=3.29%, avg=102.40, stdev=10.43, samples=5 00:11:08.538 iops : min= 24, max= 30, avg=25.60, stdev= 2.61, samples=5 00:11:08.538 lat (usec) : 500=1.33%, 750=1.33% 00:11:08.538 lat (msec) : 2=1.33%, 50=94.67% 00:11:08.538 cpu : usr=0.00%, sys=0.14%, ctx=78, majf=0, minf=2 00:11:08.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.538 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.538 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.538 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2556680: Fri Dec 13 03:22:09 2024 00:11:08.538 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2728msec) 00:11:08.538 slat (nsec): min=10850, max=32413, avg=22739.51, stdev=2127.38 00:11:08.538 clat (usec): min=406, max=41064, avg=40362.76, stdev=4955.57 00:11:08.538 lat (usec): min=439, max=41087, avg=40385.49, stdev=4954.37 00:11:08.538 clat percentiles (usec): 00:11:08.538 | 1.00th=[ 408], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:08.538 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.538 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.538 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:08.538 | 99.99th=[41157] 00:11:08.538 bw ( KiB/s): min= 96, max= 104, per=3.19%, avg=99.20, stdev= 4.38, samples=5 00:11:08.538 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:08.538 lat (usec) : 500=1.47% 00:11:08.538 lat (msec) : 50=97.06% 00:11:08.538 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:11:08.538 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.538 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.538 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.538 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.538 00:11:08.538 Run status group 0 (all jobs): 00:11:08.538 READ: bw=3102KiB/s (3177kB/s), 98.2KiB/s-2847KiB/s (101kB/s-2915kB/s), io=10.4MiB (10.9MB), run=2728-3445msec 00:11:08.538 00:11:08.538 Disk stats (read/write): 00:11:08.538 nvme0n1: ios=119/0, merge=0/0, ticks=4176/0, in_queue=4176, util=99.60% 00:11:08.538 nvme0n2: ios=2411/0, merge=0/0, ticks=3260/0, in_queue=3260, util=94.92% 00:11:08.538 nvme0n3: ios=119/0, merge=0/0, ticks=2980/0, 
in_queue=2980, util=99.26% 00:11:08.538 nvme0n4: ios=64/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.41% 00:11:08.794 03:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.794 03:22:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:09.050 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.050 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:09.307 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.307 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:09.564 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:09.564 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:09.822 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:09.822 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2556540 00:11:09.822 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:09.822 03:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:10.753 nvmf hotplug test: fio failed as expected 00:11:10.753 03:22:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:11.010 03:22:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.010 rmmod nvme_tcp 00:11:11.010 rmmod nvme_fabrics 00:11:11.010 rmmod nvme_keyring 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2553659 ']' 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2553659 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2553659 ']' 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2553659 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.010 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2553659 00:11:11.268 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.268 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.268 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2553659' 00:11:11.268 killing process with pid 2553659 00:11:11.268 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2553659 00:11:11.268 03:22:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2553659 00:11:12.205 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.205 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:12.465 03:22:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.465 03:22:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.370 00:11:14.370 real 0m29.907s 00:11:14.370 user 1m59.441s 00:11:14.370 sys 0m8.138s 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.370 ************************************ 00:11:14.370 END TEST nvmf_fio_target 00:11:14.370 ************************************ 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.370 03:22:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:14.370 ************************************ 00:11:14.370 START TEST nvmf_bdevio 00:11:14.370 ************************************ 00:11:14.371 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:14.630 * Looking for test storage... 
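(Editor's note: the nvmf_fio_target hotplug sequence that finishes just above is easy to lose in the interleaved fio error output, so here is a condensed sketch of what target/fio.sh does between its @51 and @87 trace lines, reconstructed from the xtrace records. Paths are abbreviated, the backgrounding/wait plumbing is paraphrased rather than copied from fio.sh, and only the individual commands themselves appear verbatim in the trace - treat this as an illustration, not the script.)

    # start a 10-second read workload in the background against the exported namespaces
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # hot-remove the backing bdevs while fio is still running
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done

    # fio is expected to fail once its devices disappear ("Operation not supported", err=95)
    fio_status=0
    wait "$fio_pid" || fio_status=$?

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    [ "$fio_status" -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state

In the run logged here fio exits with status 4 after the bdevs are deleted, so the "fio failed as expected" branch is taken and nvmftestfini then unloads nvme-tcp, nvme-fabrics and nvme-keyring and kills the target process, which is exactly what the trace below the note records.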
00:11:14.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:14.630 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.631 --rc genhtml_branch_coverage=1 00:11:14.631 --rc genhtml_function_coverage=1 00:11:14.631 --rc genhtml_legend=1 00:11:14.631 --rc geninfo_all_blocks=1 00:11:14.631 --rc geninfo_unexecuted_blocks=1 00:11:14.631 00:11:14.631 ' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.631 --rc genhtml_branch_coverage=1 00:11:14.631 --rc genhtml_function_coverage=1 00:11:14.631 --rc genhtml_legend=1 00:11:14.631 --rc geninfo_all_blocks=1 00:11:14.631 --rc geninfo_unexecuted_blocks=1 00:11:14.631 00:11:14.631 ' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.631 --rc genhtml_branch_coverage=1 00:11:14.631 --rc genhtml_function_coverage=1 00:11:14.631 --rc genhtml_legend=1 00:11:14.631 --rc geninfo_all_blocks=1 00:11:14.631 --rc geninfo_unexecuted_blocks=1 00:11:14.631 00:11:14.631 ' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.631 --rc genhtml_branch_coverage=1 00:11:14.631 --rc genhtml_function_coverage=1 00:11:14.631 --rc genhtml_legend=1 00:11:14.631 --rc geninfo_all_blocks=1 00:11:14.631 --rc geninfo_unexecuted_blocks=1 00:11:14.631 00:11:14.631 ' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.631 03:22:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:21.206 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:21.206 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.206 03:22:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.206 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:21.207 Found net devices under 0000:af:00.0: cvl_0_0 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:21.207 Found net devices under 0000:af:00.1: cvl_0_1 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.207 
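The trace above is the NIC-discovery pass of nvmftestinit: gather_supported_nvmf_pci_devs collects PCI functions whose vendor/device IDs match the supported NICs (Intel E810 0x1592/0x159b on this rig), then resolves each function to its kernel netdev name through sysfs before settling on cvl_0_0 as the target port and cvl_0_1 as the initiator port. A minimal sketch of that lookup pattern, assuming a plain sysfs walk instead of the script's prebuilt pci_bus_cache:

# Sketch only; the real helper is gather_supported_nvmf_pci_devs in
# test/nvmf/common.sh and also handles x722 and Mellanox IDs plus RDMA checks.
intel=0x8086
e810_ids=(0x1592 0x159b)              # the two E810 device IDs probed in the trace
pci_devs=() net_devs=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do
        [[ $device == "$id" ]] && pci_devs+=("$dev")
    done
done
# Each matching PCI function exposes its netdev name under .../net/<ifname>.
for dev in "${pci_devs[@]}"; do
    for nic in "$dev"/net/*; do
        [[ -e $nic ]] && net_devs+=("${nic##*/}")
    done
done
echo "Found net devices: ${net_devs[*]}"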
03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:11:21.207 00:11:21.207 --- 10.0.0.2 ping statistics --- 00:11:21.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.207 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:11:21.207 00:11:21.207 --- 10.0.0.1 ping statistics --- 00:11:21.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.207 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2561339 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2561339 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2561339 ']' 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.207 03:22:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.207 [2024-12-13 03:22:21.551468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
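Condensing the namespace plumbing just traced: nvmf_tcp_init moves the target port into its own network namespace, assigns 10.0.0.2/24 to the target side and 10.0.0.1/24 to the initiator side, punches TCP/4420 through iptables, and ping-checks both directions before the target application is launched inside that namespace. A rough equivalent using the interface names from this run (the real helper also wires up cleanup traps and optional second addresses):

TARGET_IF=cvl_0_0              # physical port handed to the SPDK target
INITIATOR_IF=cvl_0_1           # peer port left in the root namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> root namespace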
00:11:21.207 [2024-12-13 03:22:21.551554] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.207 [2024-12-13 03:22:21.668924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.207 [2024-12-13 03:22:21.783107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.207 [2024-12-13 03:22:21.783151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.207 [2024-12-13 03:22:21.783162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.207 [2024-12-13 03:22:21.783172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.207 [2024-12-13 03:22:21.783180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.207 [2024-12-13 03:22:21.785830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.207 [2024-12-13 03:22:21.785960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:21.207 [2024-12-13 03:22:21.786018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.207 [2024-12-13 03:22:21.786041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.207 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.207 [2024-12-13 03:22:22.395132] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 Malloc0 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.468 03:22:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 [2024-12-13 03:22:22.514554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:21.468 { 00:11:21.468 "params": { 00:11:21.468 "name": "Nvme$subsystem", 00:11:21.468 "trtype": "$TEST_TRANSPORT", 00:11:21.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.468 "adrfam": "ipv4", 00:11:21.468 "trsvcid": "$NVMF_PORT", 00:11:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.468 "hdgst": ${hdgst:-false}, 00:11:21.468 "ddgst": ${ddgst:-false} 00:11:21.468 }, 00:11:21.468 "method": "bdev_nvme_attach_controller" 00:11:21.468 } 00:11:21.468 EOF 00:11:21.468 )") 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:21.468 03:22:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:21.468 "params": { 00:11:21.468 "name": "Nvme1", 00:11:21.468 "trtype": "tcp", 00:11:21.468 "traddr": "10.0.0.2", 00:11:21.468 "adrfam": "ipv4", 00:11:21.468 "trsvcid": "4420", 00:11:21.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.468 "hdgst": false, 00:11:21.468 "ddgst": false 00:11:21.468 }, 00:11:21.468 "method": "bdev_nvme_attach_controller" 00:11:21.468 }' 00:11:21.468 [2024-12-13 03:22:22.591873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
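The rpc_cmd calls traced above provision the whole target: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. The JSON fragment printed by gen_nvmf_target_json is then handed to the bdevio app over /dev/fd/62 so its built-in initiator attaches to that subsystem as bdev Nvme1. The same provisioning written as plain scripts/rpc.py calls against the target's default /var/tmp/spdk.sock socket (a sketch; the test goes through the rpc_cmd wrapper instead):

RPC="scripts/rpc.py"     # assumes the SPDK repository root as the working directory
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420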
00:11:21.468 [2024-12-13 03:22:22.591991] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561536 ] 00:11:21.727 [2024-12-13 03:22:22.707426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.727 [2024-12-13 03:22:22.826612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.727 [2024-12-13 03:22:22.826679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.727 [2024-12-13 03:22:22.826684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.295 I/O targets: 00:11:22.295 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:22.295 00:11:22.295 00:11:22.295 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.295 http://cunit.sourceforge.net/ 00:11:22.295 00:11:22.295 00:11:22.295 Suite: bdevio tests on: Nvme1n1 00:11:22.295 Test: blockdev write read block ...passed 00:11:22.295 Test: blockdev write zeroes read block ...passed 00:11:22.295 Test: blockdev write zeroes read no split ...passed 00:11:22.295 Test: blockdev write zeroes read split ...passed 00:11:22.295 Test: blockdev write zeroes read split partial ...passed 00:11:22.295 Test: blockdev reset ...[2024-12-13 03:22:23.474379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:22.295 [2024-12-13 03:22:23.474495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:11:22.554 [2024-12-13 03:22:23.578719] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:22.554 passed 00:11:22.554 Test: blockdev write read 8 blocks ...passed 00:11:22.554 Test: blockdev write read size > 128k ...passed 00:11:22.554 Test: blockdev write read invalid size ...passed 00:11:22.554 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.554 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.554 Test: blockdev write read max offset ...passed 00:11:22.554 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.813 Test: blockdev writev readv 8 blocks ...passed 00:11:22.813 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.813 Test: blockdev writev readv block ...passed 00:11:22.813 Test: blockdev writev readv size > 128k ...passed 00:11:22.813 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.813 Test: blockdev comparev and writev ...[2024-12-13 03:22:23.876882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.876940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.876964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.876978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.877267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.877286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.877307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.877321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.877598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.877615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.877632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.877647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.877944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.877963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.877981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.813 [2024-12-13 03:22:23.877994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.813 passed 00:11:22.813 Test: blockdev nvme passthru rw ...passed 00:11:22.813 Test: blockdev nvme passthru vendor specific ...[2024-12-13 03:22:23.960355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.813 [2024-12-13 03:22:23.960387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.960533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.813 [2024-12-13 03:22:23.960547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.960671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.813 [2024-12-13 03:22:23.960685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.813 [2024-12-13 03:22:23.960816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.813 [2024-12-13 03:22:23.960831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.813 passed 00:11:22.813 Test: blockdev nvme admin passthru ...passed 00:11:22.813 Test: blockdev copy ...passed 00:11:22.813 00:11:22.813 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.813 suites 1 1 n/a 0 0 00:11:22.813 tests 23 23 23 0 0 00:11:22.813 asserts 152 152 152 0 n/a 00:11:22.813 00:11:22.813 Elapsed time = 1.601 seconds 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.750 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.750 rmmod nvme_tcp 00:11:23.750 rmmod nvme_fabrics 00:11:24.009 rmmod nvme_keyring 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2561339 ']' 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2561339 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2561339 ']' 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2561339 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.009 03:22:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2561339 00:11:24.009 03:22:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:24.009 03:22:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:24.009 03:22:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2561339' 00:11:24.009 killing process with pid 2561339 00:11:24.009 03:22:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2561339 00:11:24.009 03:22:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2561339 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.564 03:22:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.472 00:11:27.472 real 0m12.899s 00:11:27.472 user 0m23.814s 00:11:27.472 sys 0m4.993s 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.472 ************************************ 00:11:27.472 END TEST nvmf_bdevio 00:11:27.472 ************************************ 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:27.472 00:11:27.472 real 5m2.623s 00:11:27.472 user 12m4.128s 00:11:27.472 sys 1m35.356s 
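The teardown traced here mirrors the setup in reverse: the subsystem is deleted over RPC, the kernel nvme modules are unloaded (the rmmod lines above), the nvmf_tgt process (pid 2561339, reported as reactor_3) is killed and reaped, only the iptables rules tagged SPDK_NVMF are stripped, and the namespace and initiator-side address are flushed. Roughly, and leaving out the trap rearming and shm bookkeeping that nvmftestfini also does:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp nvme-fabrics                    # source of the rmmod messages above
kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null || true    # nvmfpid=2561339 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule SPDK did not add
ip netns delete cvl_0_0_ns_spdk                         # what _remove_spdk_ns boils down to here
ip -4 addr flush cvl_0_1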
00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.472 ************************************ 00:11:27.472 END TEST nvmf_target_core 00:11:27.472 ************************************ 00:11:27.472 03:22:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.472 03:22:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.472 03:22:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.472 03:22:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.472 ************************************ 00:11:27.472 START TEST nvmf_target_extra 00:11:27.472 ************************************ 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.472 * Looking for test storage... 00:11:27.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.472 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.732 --rc genhtml_branch_coverage=1 00:11:27.732 --rc genhtml_function_coverage=1 00:11:27.732 --rc genhtml_legend=1 00:11:27.732 --rc geninfo_all_blocks=1 00:11:27.732 --rc geninfo_unexecuted_blocks=1 00:11:27.732 00:11:27.732 ' 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.732 --rc genhtml_branch_coverage=1 00:11:27.732 --rc genhtml_function_coverage=1 00:11:27.732 --rc genhtml_legend=1 00:11:27.732 --rc geninfo_all_blocks=1 00:11:27.732 --rc geninfo_unexecuted_blocks=1 00:11:27.732 00:11:27.732 ' 00:11:27.732 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.732 --rc genhtml_branch_coverage=1 00:11:27.732 --rc genhtml_function_coverage=1 00:11:27.732 --rc genhtml_legend=1 00:11:27.733 --rc geninfo_all_blocks=1 00:11:27.733 --rc geninfo_unexecuted_blocks=1 00:11:27.733 00:11:27.733 ' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.733 --rc genhtml_branch_coverage=1 00:11:27.733 --rc genhtml_function_coverage=1 00:11:27.733 --rc genhtml_legend=1 00:11:27.733 --rc geninfo_all_blocks=1 00:11:27.733 --rc geninfo_unexecuted_blocks=1 00:11:27.733 00:11:27.733 ' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
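This is the same lcov probe that ran before the earlier test: scripts/common.sh splits both version strings on dots and compares them component by component, and the caller keeps the pre-2.0 '--rc lcov_*' option spellings only when the installed lcov sorts below 2. A simplified stand-in for that lt/cmp_versions pair (an assumed reimplementation, not the script's exact code):

lt() {   # true when dotted version $1 sorts before $2
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # versions are equal
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    # the real export also carries the genhtml_/geninfo_ options shown above
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi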
00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.733 ************************************ 00:11:27.733 START TEST nvmf_example 00:11:27.733 ************************************ 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:27.733 * Looking for test storage... 
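Every START/END banner and per-test real/user/sys block in this log is emitted by the run_test wrapper, which is also where the '[' 3 -le 1 ']' argument-count check above comes from. A rough sketch of its shape, assuming the real version in test/common/autotest_common.sh adds xtrace control and failure accounting on top:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # produces the real/user/sys block after each test
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test "nvmf_example" test/nvmf/target/nvmf_example.sh --transport=tcp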
00:11:27.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.733 --rc genhtml_branch_coverage=1 00:11:27.733 --rc genhtml_function_coverage=1 00:11:27.733 --rc genhtml_legend=1 00:11:27.733 --rc geninfo_all_blocks=1 00:11:27.733 --rc geninfo_unexecuted_blocks=1 00:11:27.733 00:11:27.733 ' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.733 --rc genhtml_branch_coverage=1 00:11:27.733 --rc genhtml_function_coverage=1 00:11:27.733 --rc genhtml_legend=1 00:11:27.733 --rc geninfo_all_blocks=1 00:11:27.733 --rc geninfo_unexecuted_blocks=1 00:11:27.733 00:11:27.733 ' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.733 --rc genhtml_branch_coverage=1 00:11:27.733 --rc genhtml_function_coverage=1 00:11:27.733 --rc genhtml_legend=1 00:11:27.733 --rc geninfo_all_blocks=1 00:11:27.733 --rc geninfo_unexecuted_blocks=1 00:11:27.733 00:11:27.733 ' 00:11:27.733 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.734 --rc genhtml_branch_coverage=1 00:11:27.734 --rc genhtml_function_coverage=1 00:11:27.734 --rc genhtml_legend=1 00:11:27.734 --rc geninfo_all_blocks=1 00:11:27.734 --rc geninfo_unexecuted_blocks=1 00:11:27.734 00:11:27.734 ' 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:27.734 03:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.734 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:27.993 03:22:28 
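The "[: : integer expression expected" line captured above is test's -eq seeing an empty operand: build_nvmf_app_args evaluates a single-bracket '[ "$X" -eq 1 ]' while the variable is empty, so '[' aborts with status 2 and the script simply falls through to the next branch. A small sketch of the failure mode and the usual guard (SOME_FLAG is a hypothetical variable used only for illustration):

SOME_FLAG=""
[ "$SOME_FLAG" -eq 1 ] && echo enabled        # '[' prints "integer expression expected", status 2
[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled   # guarded: empty/unset falls back to 0, no error
(( ${SOME_FLAG:-0} == 1 )) && echo enabled    # arithmetic form with the same default applied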
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.993 03:22:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:33.270 03:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:33.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:33.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:33.270 Found net devices under 0000:af:00.0: cvl_0_0 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:33.270 Found net devices under 0000:af:00.1: cvl_0_1 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.270 03:22:34 
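The two "Found net devices under 0000:af:00.x" entries above come from globbing each matched PCI function's net/ directory in sysfs to recover the kernel interface name. A stand-alone sketch of that lookup (netdev_for_pci is a hypothetical helper name):

netdev_for_pci() {
    # List the network interface(s) the kernel exposes for one PCI function.
    local pci=$1 path
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue   # glob may not match, e.g. NIC bound to a userspace driver
        echo "${path##*/}"
    done
}
netdev_for_pci 0000:af:00.0   # prints cvl_0_0 on the node this log was taken from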
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.270 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:33.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:33.270 00:11:33.271 --- 10.0.0.2 ping statistics --- 00:11:33.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.271 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:33.271 00:11:33.271 --- 10.0.0.1 ping statistics --- 00:11:33.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.271 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.271 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2565742 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2565742 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2565742 ']' 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.530 03:22:34 
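The nvmf_tcp_init block above splits the two E810 ports across a network namespace: the target-side interface is moved into its own namespace and addressed as 10.0.0.2, the initiator side stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and a ping in each direction confirms the path before the example target is launched inside that namespace. A minimal sketch of the same plumbing, assuming hypothetical interface names nic_tgt/nic_ini and namespace name tgt_ns:

ip netns add tgt_ns
ip link set nic_tgt netns tgt_ns                   # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev nic_ini                # initiator side stays in the root namespace
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev nic_tgt
ip link set nic_ini up
ip netns exec tgt_ns ip link set nic_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i nic_ini -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root namespace -> namespaced target
ip netns exec tgt_ns ping -c 1 10.0.0.1            # namespaced target -> root namespace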
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.530 03:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:34.466 03:22:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:46.674 Initializing NVMe Controllers 00:11:46.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:46.674 Initialization complete. Launching workers. 00:11:46.674 ======================================================== 00:11:46.674 Latency(us) 00:11:46.674 Device Information : IOPS MiB/s Average min max 00:11:46.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16636.30 64.99 3848.49 808.06 15918.09 00:11:46.674 ======================================================== 00:11:46.674 Total : 16636.30 64.99 3848.49 808.06 15918.09 00:11:46.674 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.674 rmmod nvme_tcp 00:11:46.674 rmmod nvme_fabrics 00:11:46.674 rmmod nvme_keyring 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2565742 ']' 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2565742 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2565742 ']' 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2565742 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.674 03:22:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565742 00:11:46.674 03:22:46 
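Once the example target is listening, the rpc_cmd calls traced above stand up the data path: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420, which spdk_nvme_perf then drives for 10 seconds. The same sequence expressed with SPDK's stock scripts/rpc.py client, as a sketch (flags copied from the run above; assumes the commands are issued from an SPDK checkout with the target already running and reachable on its default RPC socket):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                       # returns the bdev name, e.g. Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'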
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:46.674 03:22:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:46.674 03:22:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2565742' 00:11:46.674 killing process with pid 2565742 00:11:46.674 03:22:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2565742 00:11:46.674 03:22:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2565742 00:11:46.674 nvmf threads initialize successfully 00:11:46.674 bdev subsystem init successfully 00:11:46.674 created a nvmf target service 00:11:46.674 create targets's poll groups done 00:11:46.674 all subsystems of target started 00:11:46.674 nvmf target is running 00:11:46.674 all subsystems of target stopped 00:11:46.674 destroy targets's poll groups done 00:11:46.674 destroyed the nvmf target service 00:11:46.674 bdev subsystem finish successfully 00:11:46.674 nvmf threads destroy successfully 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.674 03:22:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.584 00:11:48.584 real 0m20.690s 00:11:48.584 user 0m50.209s 00:11:48.584 sys 0m5.697s 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.584 ************************************ 00:11:48.584 END TEST nvmf_example 00:11:48.584 ************************************ 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.584 ************************************ 00:11:48.584 START TEST nvmf_filesystem 00:11:48.584 ************************************ 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.584 * Looking for test storage... 00:11:48.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.584 --rc genhtml_branch_coverage=1 00:11:48.584 --rc genhtml_function_coverage=1 00:11:48.584 --rc genhtml_legend=1 00:11:48.584 --rc geninfo_all_blocks=1 00:11:48.584 --rc geninfo_unexecuted_blocks=1 00:11:48.584 00:11:48.584 ' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.584 --rc genhtml_branch_coverage=1 00:11:48.584 --rc genhtml_function_coverage=1 00:11:48.584 --rc genhtml_legend=1 00:11:48.584 --rc geninfo_all_blocks=1 00:11:48.584 --rc geninfo_unexecuted_blocks=1 00:11:48.584 00:11:48.584 ' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.584 --rc genhtml_branch_coverage=1 00:11:48.584 --rc genhtml_function_coverage=1 00:11:48.584 --rc genhtml_legend=1 00:11:48.584 --rc geninfo_all_blocks=1 00:11:48.584 --rc geninfo_unexecuted_blocks=1 00:11:48.584 00:11:48.584 ' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.584 --rc genhtml_branch_coverage=1 00:11:48.584 --rc genhtml_function_coverage=1 00:11:48.584 --rc genhtml_legend=1 00:11:48.584 --rc geninfo_all_blocks=1 00:11:48.584 --rc geninfo_unexecuted_blocks=1 00:11:48.584 00:11:48.584 ' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:48.584 03:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.584 
03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:48.584 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:48.585 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:48.585 #define SPDK_CONFIG_H 00:11:48.585 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:48.585 #define SPDK_CONFIG_APPS 1 00:11:48.585 #define SPDK_CONFIG_ARCH native 00:11:48.585 #define SPDK_CONFIG_ASAN 1 00:11:48.585 #undef SPDK_CONFIG_AVAHI 00:11:48.585 #undef SPDK_CONFIG_CET 00:11:48.585 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:48.585 #define SPDK_CONFIG_COVERAGE 1 00:11:48.585 #define SPDK_CONFIG_CROSS_PREFIX 00:11:48.585 #undef SPDK_CONFIG_CRYPTO 00:11:48.585 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:48.585 #undef SPDK_CONFIG_CUSTOMOCF 00:11:48.585 #undef SPDK_CONFIG_DAOS 00:11:48.585 #define SPDK_CONFIG_DAOS_DIR 00:11:48.585 #define SPDK_CONFIG_DEBUG 1 00:11:48.585 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:48.586 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:48.586 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:48.586 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:48.586 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:48.586 #undef SPDK_CONFIG_DPDK_UADK 00:11:48.586 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:48.586 #define SPDK_CONFIG_EXAMPLES 1 00:11:48.586 #undef SPDK_CONFIG_FC 00:11:48.586 #define SPDK_CONFIG_FC_PATH 00:11:48.586 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:48.586 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:48.586 #define SPDK_CONFIG_FSDEV 1 00:11:48.586 #undef SPDK_CONFIG_FUSE 00:11:48.586 #undef SPDK_CONFIG_FUZZER 00:11:48.586 #define SPDK_CONFIG_FUZZER_LIB 00:11:48.586 #undef SPDK_CONFIG_GOLANG 00:11:48.586 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:48.586 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:48.586 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:48.586 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:48.586 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:48.586 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:48.586 #undef SPDK_CONFIG_HAVE_LZ4 00:11:48.586 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:48.586 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:48.586 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:48.586 #define SPDK_CONFIG_IDXD 1 00:11:48.586 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:48.586 #undef SPDK_CONFIG_IPSEC_MB 00:11:48.586 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:48.586 #define SPDK_CONFIG_ISAL 1 00:11:48.586 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:48.586 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:48.586 #define SPDK_CONFIG_LIBDIR 00:11:48.586 #undef SPDK_CONFIG_LTO 00:11:48.586 #define SPDK_CONFIG_MAX_LCORES 128 00:11:48.586 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:48.586 #define SPDK_CONFIG_NVME_CUSE 1 00:11:48.586 #undef SPDK_CONFIG_OCF 00:11:48.586 #define SPDK_CONFIG_OCF_PATH 00:11:48.586 #define SPDK_CONFIG_OPENSSL_PATH 00:11:48.586 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:48.586 #define SPDK_CONFIG_PGO_DIR 00:11:48.586 #undef SPDK_CONFIG_PGO_USE 00:11:48.586 #define SPDK_CONFIG_PREFIX /usr/local 00:11:48.586 #undef SPDK_CONFIG_RAID5F 00:11:48.586 #undef SPDK_CONFIG_RBD 00:11:48.586 #define SPDK_CONFIG_RDMA 1 00:11:48.586 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:48.586 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:48.586 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:48.586 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:48.586 #define SPDK_CONFIG_SHARED 1 00:11:48.586 #undef SPDK_CONFIG_SMA 00:11:48.586 #define SPDK_CONFIG_TESTS 1 00:11:48.586 #undef SPDK_CONFIG_TSAN 
00:11:48.586 #define SPDK_CONFIG_UBLK 1 00:11:48.586 #define SPDK_CONFIG_UBSAN 1 00:11:48.586 #undef SPDK_CONFIG_UNIT_TESTS 00:11:48.586 #undef SPDK_CONFIG_URING 00:11:48.586 #define SPDK_CONFIG_URING_PATH 00:11:48.586 #undef SPDK_CONFIG_URING_ZNS 00:11:48.586 #undef SPDK_CONFIG_USDT 00:11:48.586 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:48.586 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:48.586 #undef SPDK_CONFIG_VFIO_USER 00:11:48.586 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:48.586 #define SPDK_CONFIG_VHOST 1 00:11:48.586 #define SPDK_CONFIG_VIRTIO 1 00:11:48.586 #undef SPDK_CONFIG_VTUNE 00:11:48.586 #define SPDK_CONFIG_VTUNE_DIR 00:11:48.586 #define SPDK_CONFIG_WERROR 1 00:11:48.586 #define SPDK_CONFIG_WPDK_DIR 00:11:48.586 #undef SPDK_CONFIG_XNVME 00:11:48.586 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:48.586 03:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:48.586 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
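In the pm/common trace above the framework decides which resource monitors to launch for this run: collect-cpu-load and collect-vmstat are always enabled, collect-cpu-temp and collect-bmc-pm are appended only on bare-metal Linux (the checks against FreeBSD, a QEMU board identifier and /.dockerenv), and the MONITOR_RESOURCES_SUDO map records which monitors must run under sudo. A minimal sketch of that selection logic follows; the sysfs path used for the QEMU check is an assumption for illustration, not copied from the script.

    #!/usr/bin/env bash
    # Sketch of the monitor selection traced above from scripts/perf/pm/common.
    # The board_vendor path is an illustrative assumption.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1     # BMC power readings need sudo
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    PM_OS=$(uname -s)
    board_vendor=$(cat /sys/class/dmi/id/board_vendor 2>/dev/null || true)
    if [[ $PM_OS == Linux && $board_vendor != QEMU && ! -e /.dockerenv ]]; then
        # Bare-metal Linux also gets CPU temperature and BMC power monitoring
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi

    for mon in "${MONITOR_RESOURCES[@]}"; do
        printf '%s (sudo=%s)\n' "$mon" "${MONITOR_RESOURCES_SUDO[$mon]}"
    done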
00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:48.587 03:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:48.587 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
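The long run of ": 0" / "export SPDK_TEST_..." pairs above is autotest_common.sh giving every test flag a default before exporting it: flags already set by this job's autorun-spdk.conf (RUN_NIGHTLY=1, SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, SPDK_RUN_ASAN=1, SPDK_RUN_UBSAN=1) keep their values, everything else falls back to 0 or an empty string. A sketch of that idiom with a handful of illustrative flags; the concrete defaults live in autotest_common.sh and are assumed here.

    #!/usr/bin/env bash
    # Default-then-export pattern inferred from the ": 0" / "export VAR" trace pairs.
    # Values inherited from the environment or autorun-spdk.conf win; unset flags get the default.
    : "${RUN_NIGHTLY:=0}"               && export RUN_NIGHTLY
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"  && export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}"            && export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=}"   && export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=}"        && export SPDK_TEST_NVMF_NICS

    # With autorun-spdk.conf sourced beforehand, the trace shows ": 1", ": tcp"
    # and ": e810" because those variables were already set.
    echo "NVMF=${SPDK_TEST_NVMF} transport=${SPDK_TEST_NVMF_TRANSPORT} nics=${SPDK_TEST_NVMF_NICS}"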
00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:48.588 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:48.589 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2568307 ]] 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2568307 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
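A few entries back the trace also configures the sanitizer runtimes for this ASAN/UBSAN build: aborts are forced on error with core dumps left enabled, and LSAN is pointed at a freshly rewritten suppression file whose only entry in this log is leak:libfuse3.so. A compact sketch of that setup; the direct redirect into the file is an assumption, the option strings themselves are taken verbatim from the trace.

    #!/usr/bin/env bash
    # Sanitizer runtime options as traced; the suppression file is rebuilt on
    # every run so stale entries cannot hide new leaks.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"   # assumed redirect; only this entry appears in the trace
    export LSAN_OPTIONS=suppressions=$asan_suppression_file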
00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.TAoCvW 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.TAoCvW/tests/target /tmp/spdk.TAoCvW 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:48.849 03:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88965890048 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552401408 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6586511360 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47764832256 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776198656 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087462400 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23019520 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775834112 00:11:48.849 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=368640 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:48.850 03:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:48.850 * Looking for test storage... 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88965890048 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8801103872 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:48.850 03:22:49 
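The set_test_storage trace above sizes every mount reported by df -T and takes the first candidate directory (the test dir itself, a per-test subdirectory of a mktemp -udt spdk.XXXXXX fallback, or the fallback) with at least the requested ~2.2 GB free; here the overlay root with roughly 89 GB available wins and SPDK_TEST_STORAGE is exported to the nvmf/target test directory. A rough sketch of the same idea, not the real function; the helper name and candidate paths are illustrative.

    #!/usr/bin/env bash
    # Simplified stand-in for set_test_storage: first candidate with enough
    # free space becomes SPDK_TEST_STORAGE.
    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB + 64 MiB headroom = 2214592512, as in the trace

    pick_test_storage() {
        local dir avail_kb
        for dir in "$@"; do
            mkdir -p "$dir" 2>/dev/null || continue
            # df -P: POSIX output, line 2, column 4 = available 1K blocks
            avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
            if (( avail_kb * 1024 >= requested_size )); then
                export SPDK_TEST_STORAGE=$dir
                printf '* Found test storage at %s\n' "$dir"
                return 0
            fi
        done
        printf '* No candidate had %s bytes free\n' "$requested_size" >&2
        return 1
    }

    pick_test_storage "$PWD/tests" "$(mktemp -udt spdk.XXXXXX)/tests"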
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.850 --rc genhtml_branch_coverage=1 00:11:48.850 --rc genhtml_function_coverage=1 00:11:48.850 --rc genhtml_legend=1 00:11:48.850 --rc geninfo_all_blocks=1 00:11:48.850 --rc geninfo_unexecuted_blocks=1 00:11:48.850 00:11:48.850 ' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.850 --rc genhtml_branch_coverage=1 00:11:48.850 --rc genhtml_function_coverage=1 00:11:48.850 --rc genhtml_legend=1 00:11:48.850 --rc geninfo_all_blocks=1 00:11:48.850 --rc geninfo_unexecuted_blocks=1 00:11:48.850 00:11:48.850 ' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.850 --rc genhtml_branch_coverage=1 00:11:48.850 --rc genhtml_function_coverage=1 00:11:48.850 --rc genhtml_legend=1 00:11:48.850 --rc geninfo_all_blocks=1 00:11:48.850 --rc geninfo_unexecuted_blocks=1 00:11:48.850 00:11:48.850 ' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.850 --rc genhtml_branch_coverage=1 00:11:48.850 --rc genhtml_function_coverage=1 00:11:48.850 --rc genhtml_legend=1 00:11:48.850 --rc geninfo_all_blocks=1 00:11:48.850 --rc geninfo_unexecuted_blocks=1 00:11:48.850 00:11:48.850 ' 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:48.850 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.851 03:22:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:54.126 
03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:54.126 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:54.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:54.126 Found net devices under 0000:af:00.0: cvl_0_0 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:54.126 Found net devices under 
0000:af:00.1: cvl_0_1 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.126 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:11:54.385 00:11:54.385 --- 10.0.0.2 ping statistics --- 00:11:54.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.385 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:11:54.385 00:11:54.385 --- 10.0.0.1 ping statistics --- 00:11:54.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.385 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.385 ************************************ 00:11:54.385 START TEST nvmf_filesystem_no_in_capsule 00:11:54.385 ************************************ 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:54.385 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2571296 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2571296 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2571296 ']' 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.386 03:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.645 [2024-12-13 03:22:55.631826] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:54.645 [2024-12-13 03:22:55.631921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.645 [2024-12-13 03:22:55.748337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.904 [2024-12-13 03:22:55.858837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.904 [2024-12-13 03:22:55.858881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.904 [2024-12-13 03:22:55.858891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.904 [2024-12-13 03:22:55.858901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.904 [2024-12-13 03:22:55.858909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:54.904 [2024-12-13 03:22:55.861335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.904 [2024-12-13 03:22:55.861411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.904 [2024-12-13 03:22:55.861514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.904 [2024-12-13 03:22:55.861524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.472 [2024-12-13 03:22:56.480875] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.472 03:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.039 Malloc1 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.039 03:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.039 [2024-12-13 03:22:57.078939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.039 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:56.040 { 00:11:56.040 "name": "Malloc1", 00:11:56.040 "aliases": [ 00:11:56.040 "6244800e-ce20-4e94-98ad-16a4d819bb53" 00:11:56.040 ], 00:11:56.040 "product_name": "Malloc disk", 00:11:56.040 "block_size": 512, 00:11:56.040 "num_blocks": 1048576, 00:11:56.040 "uuid": "6244800e-ce20-4e94-98ad-16a4d819bb53", 00:11:56.040 "assigned_rate_limits": { 00:11:56.040 "rw_ios_per_sec": 0, 00:11:56.040 "rw_mbytes_per_sec": 0, 00:11:56.040 "r_mbytes_per_sec": 0, 00:11:56.040 "w_mbytes_per_sec": 0 00:11:56.040 }, 00:11:56.040 "claimed": true, 00:11:56.040 "claim_type": "exclusive_write", 00:11:56.040 "zoned": false, 00:11:56.040 "supported_io_types": { 00:11:56.040 "read": 
true, 00:11:56.040 "write": true, 00:11:56.040 "unmap": true, 00:11:56.040 "flush": true, 00:11:56.040 "reset": true, 00:11:56.040 "nvme_admin": false, 00:11:56.040 "nvme_io": false, 00:11:56.040 "nvme_io_md": false, 00:11:56.040 "write_zeroes": true, 00:11:56.040 "zcopy": true, 00:11:56.040 "get_zone_info": false, 00:11:56.040 "zone_management": false, 00:11:56.040 "zone_append": false, 00:11:56.040 "compare": false, 00:11:56.040 "compare_and_write": false, 00:11:56.040 "abort": true, 00:11:56.040 "seek_hole": false, 00:11:56.040 "seek_data": false, 00:11:56.040 "copy": true, 00:11:56.040 "nvme_iov_md": false 00:11:56.040 }, 00:11:56.040 "memory_domains": [ 00:11:56.040 { 00:11:56.040 "dma_device_id": "system", 00:11:56.040 "dma_device_type": 1 00:11:56.040 }, 00:11:56.040 { 00:11:56.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.040 "dma_device_type": 2 00:11:56.040 } 00:11:56.040 ], 00:11:56.040 "driver_specific": {} 00:11:56.040 } 00:11:56.040 ]' 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:56.040 03:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.417 03:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.417 03:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.417 03:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.417 03:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.417 03:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.321 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.321 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.321 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:59.321 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.321 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.321 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:59.322 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:59.580 03:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:00.515 03:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.453 ************************************ 00:12:01.453 START TEST filesystem_ext4 00:12:01.453 ************************************ 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
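Before the per-filesystem sub-tests start, the initiator side of filesystem.sh (steps 60 through 69 in the trace above) connects to the exported subsystem over TCP, waits for the namespace to surface as a block device, checks its size against the 512 MiB malloc bdev, and carves a single GPT partition out of it. A hedged sketch of that sequence, with the NQN, host identity, address and serial taken from this run; the log resolves the device to nvme0n1, but the name can differ on another host, and the polling loop below is a simplified stand-in for the test's waitforserial helper:

  # Connect to the SPDK target exported on 10.0.0.2:4420 (NQN and host UUID as in the trace).
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid=80b56b8f-cbc7-e911-906e-0017a4403562

  # Wait until the namespace shows up; the test greps lsblk for the subsystem serial.
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done

  # One GPT partition spanning the whole 512 MiB namespace, then re-read the partition table.
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1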
00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:01.453 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:01.453 mke2fs 1.47.0 (5-Feb-2023) 00:12:01.712 Discarding device blocks: 0/522240 done 00:12:01.712 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:01.712 Filesystem UUID: 4214b156-8979-4a11-969a-0f3ee05b93b2 00:12:01.712 Superblock backups stored on blocks: 00:12:01.712 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:01.712 00:12:01.712 Allocating group tables: 0/64 done 00:12:01.712 Writing inode tables: 0/64 done 00:12:01.712 Creating journal (8192 blocks): done 00:12:01.712 Writing superblocks and filesystem accounting information: 0/64 done 00:12:01.712 00:12:01.712 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:01.712 03:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.280 
03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2571296 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.280 00:12:08.280 real 0m5.793s 00:12:08.280 user 0m0.024s 00:12:08.280 sys 0m0.071s 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:08.280 ************************************ 00:12:08.280 END TEST filesystem_ext4 00:12:08.280 ************************************ 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.280 ************************************ 00:12:08.280 START TEST filesystem_btrfs 00:12:08.280 ************************************ 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:08.280 03:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:08.280 btrfs-progs v6.8.1 00:12:08.280 See https://btrfs.readthedocs.io for more information. 00:12:08.280 00:12:08.280 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:08.280 NOTE: several default settings have changed in version 5.15, please make sure 00:12:08.280 this does not affect your deployments: 00:12:08.280 - DUP for metadata (-m dup) 00:12:08.280 - enabled no-holes (-O no-holes) 00:12:08.280 - enabled free-space-tree (-R free-space-tree) 00:12:08.280 00:12:08.280 Label: (null) 00:12:08.280 UUID: fbbe9146-105b-4542-bc70-42b31c9f5147 00:12:08.280 Node size: 16384 00:12:08.280 Sector size: 4096 (CPU page size: 4096) 00:12:08.280 Filesystem size: 510.00MiB 00:12:08.280 Block group profiles: 00:12:08.280 Data: single 8.00MiB 00:12:08.280 Metadata: DUP 32.00MiB 00:12:08.280 System: DUP 8.00MiB 00:12:08.280 SSD detected: yes 00:12:08.280 Zoned device: no 00:12:08.280 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:08.280 Checksum: crc32c 00:12:08.280 Number of devices: 1 00:12:08.280 Devices: 00:12:08.280 ID SIZE PATH 00:12:08.280 1 510.00MiB /dev/nvme0n1p1 00:12:08.280 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:08.280 03:23:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2571296 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:08.280 
03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:08.280 00:12:08.280 real 0m0.682s 00:12:08.280 user 0m0.032s 00:12:08.280 sys 0m0.110s 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:08.280 ************************************ 00:12:08.280 END TEST filesystem_btrfs 00:12:08.280 ************************************ 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:08.280 ************************************ 00:12:08.280 START TEST filesystem_xfs 00:12:08.280 ************************************ 00:12:08.280 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:08.281 03:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:08.542 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:08.542 = sectsz=512 attr=2, projid32bit=1 00:12:08.542 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:08.542 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:08.542 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:08.542 = sunit=0 swidth=0 blks 00:12:08.542 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:08.542 log =internal log bsize=4096 blocks=16384, version=2 00:12:08.542 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:08.542 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:09.479 Discarding blocks...Done. 00:12:09.479 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:09.479 03:23:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2571296 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.014 00:12:12.014 real 0m3.760s 00:12:12.014 user 0m0.031s 00:12:12.014 sys 0m0.069s 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:12.014 ************************************ 00:12:12.014 END TEST filesystem_xfs 00:12:12.014 ************************************ 00:12:12.014 03:23:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.014 03:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2571296 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2571296 ']' 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2571296 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.014 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2571296 00:12:12.273 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.273 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.273 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2571296' 00:12:12.273 killing process with pid 2571296 00:12:12.273 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2571296 00:12:12.273 03:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2571296 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:14.808 00:12:14.808 real 0m20.295s 00:12:14.808 user 1m18.522s 00:12:14.808 sys 0m1.565s 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.808 ************************************ 00:12:14.808 END TEST nvmf_filesystem_no_in_capsule 00:12:14.808 ************************************ 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.808 ************************************ 00:12:14.808 START TEST nvmf_filesystem_in_capsule 00:12:14.808 ************************************ 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2574869 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2574869 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.808 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2574869 ']' 00:12:14.809 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.809 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.809 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:14.809 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.809 03:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.809 [2024-12-13 03:23:16.003304] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:14.809 [2024-12-13 03:23:16.003410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.068 [2024-12-13 03:23:16.121083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.068 [2024-12-13 03:23:16.229480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.068 [2024-12-13 03:23:16.229527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.068 [2024-12-13 03:23:16.229537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.068 [2024-12-13 03:23:16.229548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.068 [2024-12-13 03:23:16.229556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.068 [2024-12-13 03:23:16.231852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.068 [2024-12-13 03:23:16.231935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.068 [2024-12-13 03:23:16.231991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.068 [2024-12-13 03:23:16.232000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.636 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.636 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:15.636 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.636 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.636 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.895 [2024-12-13 03:23:16.861272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.895 03:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.895 03:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.464 Malloc1 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.464 [2024-12-13 03:23:17.440489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:16.464 03:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:16.464 { 00:12:16.464 "name": "Malloc1", 00:12:16.464 "aliases": [ 00:12:16.464 "20829a2d-d45d-48f0-b467-13a778af15cb" 00:12:16.464 ], 00:12:16.464 "product_name": "Malloc disk", 00:12:16.464 "block_size": 512, 00:12:16.464 "num_blocks": 1048576, 00:12:16.464 "uuid": "20829a2d-d45d-48f0-b467-13a778af15cb", 00:12:16.464 "assigned_rate_limits": { 00:12:16.464 "rw_ios_per_sec": 0, 00:12:16.464 "rw_mbytes_per_sec": 0, 00:12:16.464 "r_mbytes_per_sec": 0, 00:12:16.464 "w_mbytes_per_sec": 0 00:12:16.464 }, 00:12:16.464 "claimed": true, 00:12:16.464 "claim_type": "exclusive_write", 00:12:16.464 "zoned": false, 00:12:16.464 "supported_io_types": { 00:12:16.464 "read": true, 00:12:16.464 "write": true, 00:12:16.464 "unmap": true, 00:12:16.464 "flush": true, 00:12:16.464 "reset": true, 00:12:16.464 "nvme_admin": false, 00:12:16.464 "nvme_io": false, 00:12:16.464 "nvme_io_md": false, 00:12:16.464 "write_zeroes": true, 00:12:16.464 "zcopy": true, 00:12:16.464 "get_zone_info": false, 00:12:16.464 "zone_management": false, 00:12:16.464 "zone_append": false, 00:12:16.464 "compare": false, 00:12:16.464 "compare_and_write": false, 00:12:16.464 "abort": true, 00:12:16.464 "seek_hole": false, 00:12:16.464 "seek_data": false, 00:12:16.464 "copy": true, 00:12:16.464 "nvme_iov_md": false 00:12:16.464 }, 00:12:16.464 "memory_domains": [ 00:12:16.464 { 00:12:16.464 "dma_device_id": "system", 00:12:16.464 "dma_device_type": 1 00:12:16.464 }, 00:12:16.464 { 00:12:16.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.464 "dma_device_type": 2 00:12:16.464 } 00:12:16.464 ], 00:12:16.464 "driver_specific": {} 00:12:16.464 } 00:12:16.464 ]' 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:16.464 03:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.842 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.842 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.842 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.842 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.842 03:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:19.745 03:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:19.745 03:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:20.313 03:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.251 ************************************ 00:12:21.251 START TEST filesystem_in_capsule_ext4 00:12:21.251 ************************************ 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:21.251 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:21.251 mke2fs 1.47.0 (5-Feb-2023) 00:12:21.251 Discarding device blocks: 0/522240 done 00:12:21.251 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:21.251 Filesystem UUID: 4aefc413-4620-433e-bc20-551796809221 00:12:21.251 Superblock backups stored on blocks: 00:12:21.251 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:21.251 00:12:21.251 Allocating group tables: 0/64 done 00:12:21.251 Writing inode tables: 
0/64 done 00:12:21.819 Creating journal (8192 blocks): done 00:12:21.819 Writing superblocks and filesystem accounting information: 0/64 done 00:12:21.819 00:12:21.819 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:21.819 03:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:27.090 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2574869 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:27.349 00:12:27.349 real 0m6.113s 00:12:27.349 user 0m0.029s 00:12:27.349 sys 0m0.069s 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:27.349 ************************************ 00:12:27.349 END TEST filesystem_in_capsule_ext4 00:12:27.349 ************************************ 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.349 
************************************ 00:12:27.349 START TEST filesystem_in_capsule_btrfs 00:12:27.349 ************************************ 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:27.349 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:27.608 btrfs-progs v6.8.1 00:12:27.608 See https://btrfs.readthedocs.io for more information. 00:12:27.608 00:12:27.608 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:27.608 NOTE: several default settings have changed in version 5.15, please make sure 00:12:27.608 this does not affect your deployments: 00:12:27.608 - DUP for metadata (-m dup) 00:12:27.608 - enabled no-holes (-O no-holes) 00:12:27.608 - enabled free-space-tree (-R free-space-tree) 00:12:27.608 00:12:27.608 Label: (null) 00:12:27.608 UUID: 88f26f91-8f5e-4852-9ae3-7d206be6c21f 00:12:27.608 Node size: 16384 00:12:27.608 Sector size: 4096 (CPU page size: 4096) 00:12:27.608 Filesystem size: 510.00MiB 00:12:27.608 Block group profiles: 00:12:27.608 Data: single 8.00MiB 00:12:27.608 Metadata: DUP 32.00MiB 00:12:27.608 System: DUP 8.00MiB 00:12:27.608 SSD detected: yes 00:12:27.608 Zoned device: no 00:12:27.608 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:27.608 Checksum: crc32c 00:12:27.608 Number of devices: 1 00:12:27.608 Devices: 00:12:27.608 ID SIZE PATH 00:12:27.608 1 510.00MiB /dev/nvme0n1p1 00:12:27.608 00:12:27.608 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:27.608 03:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.176 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.176 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2574869 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.177 00:12:28.177 real 0m0.755s 00:12:28.177 user 0m0.030s 00:12:28.177 sys 0m0.110s 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:28.177 ************************************ 00:12:28.177 END TEST filesystem_in_capsule_btrfs 00:12:28.177 ************************************ 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.177 ************************************ 00:12:28.177 START TEST filesystem_in_capsule_xfs 00:12:28.177 ************************************ 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:28.177 03:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:28.177 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:28.177 = sectsz=512 attr=2, projid32bit=1 00:12:28.177 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:28.177 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:28.177 data = bsize=4096 blocks=130560, imaxpct=25 00:12:28.177 = sunit=0 swidth=0 blks 00:12:28.177 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:28.177 log =internal log bsize=4096 blocks=16384, version=2 00:12:28.177 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:28.177 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:29.114 Discarding blocks...Done. 
00:12:29.114 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:29.114 03:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2574869 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:31.122 03:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:31.122 00:12:31.122 real 0m2.727s 00:12:31.122 user 0m0.030s 00:12:31.122 sys 0m0.068s 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:31.122 ************************************ 00:12:31.122 END TEST filesystem_in_capsule_xfs 00:12:31.122 ************************************ 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:31.122 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.381 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.381 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:31.381 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:31.381 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2574869 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2574869 ']' 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2574869 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2574869 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2574869' 00:12:31.640 killing process with pid 2574869 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2574869 00:12:31.640 03:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2574869 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:34.175 00:12:34.175 real 0m19.384s 00:12:34.175 user 1m14.886s 00:12:34.175 sys 0m1.539s 00:12:34.175 03:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.175 ************************************ 00:12:34.175 END TEST nvmf_filesystem_in_capsule 00:12:34.175 ************************************ 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.175 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.175 rmmod nvme_tcp 00:12:34.175 rmmod nvme_fabrics 00:12:34.175 rmmod nvme_keyring 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.433 03:23:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.338 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.338 00:12:36.338 real 0m47.942s 00:12:36.338 user 2m35.347s 00:12:36.338 sys 0m7.373s 00:12:36.338 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.339 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 
************************************ 00:12:36.339 END TEST nvmf_filesystem 00:12:36.339 ************************************ 00:12:36.339 03:23:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.339 03:23:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.339 03:23:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.339 03:23:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 ************************************ 00:12:36.339 START TEST nvmf_target_discovery 00:12:36.339 ************************************ 00:12:36.339 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:36.599 * Looking for test storage... 00:12:36.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.599 --rc genhtml_branch_coverage=1 00:12:36.599 --rc genhtml_function_coverage=1 00:12:36.599 --rc genhtml_legend=1 00:12:36.599 --rc geninfo_all_blocks=1 00:12:36.599 --rc geninfo_unexecuted_blocks=1 00:12:36.599 00:12:36.599 ' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.599 --rc genhtml_branch_coverage=1 00:12:36.599 --rc genhtml_function_coverage=1 00:12:36.599 --rc genhtml_legend=1 00:12:36.599 --rc geninfo_all_blocks=1 00:12:36.599 --rc geninfo_unexecuted_blocks=1 00:12:36.599 00:12:36.599 ' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.599 --rc genhtml_branch_coverage=1 00:12:36.599 --rc genhtml_function_coverage=1 00:12:36.599 --rc genhtml_legend=1 00:12:36.599 --rc geninfo_all_blocks=1 00:12:36.599 --rc geninfo_unexecuted_blocks=1 00:12:36.599 00:12:36.599 ' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.599 --rc genhtml_branch_coverage=1 00:12:36.599 --rc genhtml_function_coverage=1 00:12:36.599 --rc genhtml_legend=1 00:12:36.599 --rc geninfo_all_blocks=1 00:12:36.599 --rc geninfo_unexecuted_blocks=1 00:12:36.599 00:12:36.599 ' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.599 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.600 03:23:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.176 03:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:43.176 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:43.176 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:43.176 Found net devices under 0000:af:00.0: cvl_0_0 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:43.176 Found net devices under 0000:af:00.1: cvl_0_1 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.176 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.177 03:23:43 
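The nvmf_tcp_init sequence being traced here splits the two E810 ports between the root namespace (initiator side) and a dedicated namespace for the target. A minimal standalone sketch of the same plumbing, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing picked up in this run:

  ip netns add cvl_0_0_ns_spdk                                            # namespace that will hold the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # reachability check in both directions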
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:12:43.177 00:12:43.177 --- 10.0.0.2 ping statistics --- 00:12:43.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.177 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:43.177 00:12:43.177 --- 10.0.0.1 ping statistics --- 00:12:43.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.177 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2581688 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2581688 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2581688 ']' 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.177 03:23:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 [2024-12-13 03:23:43.652137] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:43.177 [2024-12-13 03:23:43.652223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.177 [2024-12-13 03:23:43.769870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.177 [2024-12-13 03:23:43.884160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.177 [2024-12-13 03:23:43.884201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.177 [2024-12-13 03:23:43.884212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.177 [2024-12-13 03:23:43.884223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.177 [2024-12-13 03:23:43.884232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
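nvmfappstart runs the SPDK target inside that namespace and blocks until its RPC socket answers. Stripped of the harness plumbing, and with rpc.py used directly where the harness uses waitforlisten, the essential steps look roughly like the sketch below; the polling loop is illustrative, not the exact waitforlisten implementation:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target is ready to accept RPCs.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done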
00:12:43.177 [2024-12-13 03:23:43.886490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.177 [2024-12-13 03:23:43.886563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.177 [2024-12-13 03:23:43.886658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.177 [2024-12-13 03:23:43.886668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.436 [2024-12-13 03:23:44.502067] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.436 Null1 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.436 03:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.436 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.436 [2024-12-13 03:23:44.563510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 Null2 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:43.437 Null3 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 Null4 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.696 03:23:44 
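The provisioning loop in discovery.sh, reconstructed from the rpc_cmd calls traced here, creates one null bdev and one subsystem per iteration and exposes each on the target address. rpc_cmd in the harness forwards these to scripts/rpc.py, so an equivalent by-hand sketch (socket option omitted) is:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # NVMF_TRANSPORT_OPTS as assembled in common.sh
  for i in $(seq 1 4); do
      $rpc bdev_null_create Null$i 102400 512         # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done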
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:43.696 00:12:43.696 Discovery Log Number of Records 6, Generation counter 6 00:12:43.696 =====Discovery Log Entry 0====== 00:12:43.696 trtype: tcp 00:12:43.696 adrfam: ipv4 00:12:43.696 subtype: current discovery subsystem 00:12:43.696 treq: not required 00:12:43.696 portid: 0 00:12:43.696 trsvcid: 4420 00:12:43.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:43.696 traddr: 10.0.0.2 00:12:43.696 eflags: explicit discovery connections, duplicate discovery information 00:12:43.696 sectype: none 00:12:43.696 =====Discovery Log Entry 1====== 00:12:43.696 trtype: tcp 00:12:43.696 adrfam: ipv4 00:12:43.696 subtype: nvme subsystem 00:12:43.696 treq: not required 00:12:43.696 portid: 0 00:12:43.696 trsvcid: 4420 00:12:43.696 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:43.696 traddr: 10.0.0.2 00:12:43.696 eflags: none 00:12:43.696 sectype: none 00:12:43.696 =====Discovery Log Entry 2====== 00:12:43.696 trtype: tcp 00:12:43.696 adrfam: ipv4 00:12:43.696 subtype: nvme subsystem 00:12:43.696 treq: not required 00:12:43.696 portid: 0 00:12:43.696 trsvcid: 4420 00:12:43.696 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:43.696 traddr: 10.0.0.2 00:12:43.696 eflags: none 00:12:43.696 sectype: none 00:12:43.696 =====Discovery Log Entry 3====== 00:12:43.696 trtype: tcp 00:12:43.696 adrfam: ipv4 00:12:43.696 subtype: nvme subsystem 00:12:43.696 treq: not required 00:12:43.696 portid: 0 00:12:43.696 trsvcid: 4420 00:12:43.696 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:43.696 traddr: 10.0.0.2 00:12:43.696 eflags: none 00:12:43.696 sectype: none 00:12:43.696 =====Discovery Log Entry 4====== 00:12:43.696 trtype: tcp 00:12:43.696 adrfam: ipv4 00:12:43.696 subtype: nvme subsystem 
00:12:43.696 treq: not required 00:12:43.696 portid: 0 00:12:43.696 trsvcid: 4420 00:12:43.696 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:43.696 traddr: 10.0.0.2 00:12:43.696 eflags: none 00:12:43.696 sectype: none 00:12:43.696 =====Discovery Log Entry 5====== 00:12:43.696 trtype: tcp 00:12:43.696 adrfam: ipv4 00:12:43.696 subtype: discovery subsystem referral 00:12:43.696 treq: not required 00:12:43.696 portid: 0 00:12:43.696 trsvcid: 4430 00:12:43.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:43.696 traddr: 10.0.0.2 00:12:43.696 eflags: none 00:12:43.696 sectype: none 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:43.696 Perform nvmf subsystem discovery via RPC 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.696 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 [ 00:12:43.956 { 00:12:43.956 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:43.956 "subtype": "Discovery", 00:12:43.956 "listen_addresses": [ 00:12:43.956 { 00:12:43.956 "trtype": "TCP", 00:12:43.956 "adrfam": "IPv4", 00:12:43.956 "traddr": "10.0.0.2", 00:12:43.956 "trsvcid": "4420" 00:12:43.956 } 00:12:43.956 ], 00:12:43.956 "allow_any_host": true, 00:12:43.956 "hosts": [] 00:12:43.956 }, 00:12:43.956 { 00:12:43.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.956 "subtype": "NVMe", 00:12:43.956 "listen_addresses": [ 00:12:43.956 { 00:12:43.956 "trtype": "TCP", 00:12:43.956 "adrfam": "IPv4", 00:12:43.956 "traddr": "10.0.0.2", 00:12:43.956 "trsvcid": "4420" 00:12:43.956 } 00:12:43.956 ], 00:12:43.956 "allow_any_host": true, 00:12:43.956 "hosts": [], 00:12:43.956 "serial_number": "SPDK00000000000001", 00:12:43.956 "model_number": "SPDK bdev Controller", 00:12:43.956 "max_namespaces": 32, 00:12:43.956 "min_cntlid": 1, 00:12:43.956 "max_cntlid": 65519, 00:12:43.956 "namespaces": [ 00:12:43.956 { 00:12:43.956 "nsid": 1, 00:12:43.956 "bdev_name": "Null1", 00:12:43.956 "name": "Null1", 00:12:43.956 "nguid": "EA457BB563804CBABDD6DD0EDDF7EE43", 00:12:43.956 "uuid": "ea457bb5-6380-4cba-bdd6-dd0eddf7ee43" 00:12:43.956 } 00:12:43.956 ] 00:12:43.956 }, 00:12:43.956 { 00:12:43.956 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:43.956 "subtype": "NVMe", 00:12:43.956 "listen_addresses": [ 00:12:43.956 { 00:12:43.956 "trtype": "TCP", 00:12:43.956 "adrfam": "IPv4", 00:12:43.956 "traddr": "10.0.0.2", 00:12:43.956 "trsvcid": "4420" 00:12:43.956 } 00:12:43.956 ], 00:12:43.956 "allow_any_host": true, 00:12:43.956 "hosts": [], 00:12:43.956 "serial_number": "SPDK00000000000002", 00:12:43.956 "model_number": "SPDK bdev Controller", 00:12:43.956 "max_namespaces": 32, 00:12:43.956 "min_cntlid": 1, 00:12:43.956 "max_cntlid": 65519, 00:12:43.956 "namespaces": [ 00:12:43.956 { 00:12:43.956 "nsid": 1, 00:12:43.956 "bdev_name": "Null2", 00:12:43.956 "name": "Null2", 00:12:43.956 "nguid": "648E062D0E284A2DA92E076BE5C09F53", 00:12:43.956 "uuid": "648e062d-0e28-4a2d-a92e-076be5c09f53" 00:12:43.956 } 00:12:43.956 ] 00:12:43.956 }, 00:12:43.956 { 00:12:43.956 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:43.956 "subtype": "NVMe", 00:12:43.956 "listen_addresses": [ 00:12:43.956 { 00:12:43.956 "trtype": "TCP", 00:12:43.956 "adrfam": "IPv4", 00:12:43.956 "traddr": "10.0.0.2", 
00:12:43.956 "trsvcid": "4420" 00:12:43.956 } 00:12:43.956 ], 00:12:43.956 "allow_any_host": true, 00:12:43.956 "hosts": [], 00:12:43.956 "serial_number": "SPDK00000000000003", 00:12:43.956 "model_number": "SPDK bdev Controller", 00:12:43.956 "max_namespaces": 32, 00:12:43.956 "min_cntlid": 1, 00:12:43.956 "max_cntlid": 65519, 00:12:43.956 "namespaces": [ 00:12:43.956 { 00:12:43.956 "nsid": 1, 00:12:43.956 "bdev_name": "Null3", 00:12:43.956 "name": "Null3", 00:12:43.956 "nguid": "D2373AEAAD8A4248BCD7B34270FD3F1B", 00:12:43.956 "uuid": "d2373aea-ad8a-4248-bcd7-b34270fd3f1b" 00:12:43.956 } 00:12:43.956 ] 00:12:43.956 }, 00:12:43.956 { 00:12:43.956 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:43.956 "subtype": "NVMe", 00:12:43.956 "listen_addresses": [ 00:12:43.956 { 00:12:43.956 "trtype": "TCP", 00:12:43.956 "adrfam": "IPv4", 00:12:43.956 "traddr": "10.0.0.2", 00:12:43.956 "trsvcid": "4420" 00:12:43.956 } 00:12:43.956 ], 00:12:43.956 "allow_any_host": true, 00:12:43.956 "hosts": [], 00:12:43.956 "serial_number": "SPDK00000000000004", 00:12:43.956 "model_number": "SPDK bdev Controller", 00:12:43.956 "max_namespaces": 32, 00:12:43.956 "min_cntlid": 1, 00:12:43.956 "max_cntlid": 65519, 00:12:43.956 "namespaces": [ 00:12:43.956 { 00:12:43.956 "nsid": 1, 00:12:43.956 "bdev_name": "Null4", 00:12:43.956 "name": "Null4", 00:12:43.956 "nguid": "3EC3E647743642E2BF9BDE9A3678F38D", 00:12:43.956 "uuid": "3ec3e647-7436-42e2-bf9b-de9a3678f38d" 00:12:43.956 } 00:12:43.956 ] 00:12:43.956 } 00:12:43.956 ] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.956 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:43.957 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.957 03:23:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:43.957 03:23:45 
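Teardown mirrors the setup loop: subsystems and their null bdevs are deleted pairwise, the discovery referral is removed, and bdev_get_bdevs is queried to confirm nothing was left behind. With the same rpc helper as in the sketch above:

  for i in $(seq 1 4); do
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $rpc bdev_null_delete Null$i
  done
  $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  $rpc bdev_get_bdevs | jq -r '.[].name'    # empty output means every test bdev is gone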
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.957 rmmod nvme_tcp 00:12:43.957 rmmod nvme_fabrics 00:12:43.957 rmmod nvme_keyring 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2581688 ']' 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2581688 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2581688 ']' 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2581688 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2581688 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2581688' 00:12:43.957 killing process with pid 2581688 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2581688 00:12:43.957 03:23:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2581688 00:12:45.334 03:23:46 
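nvmftestfini then tears down the host side: the kernel initiator modules pulled in earlier by modprobe nvme-tcp are removed again and the target process is stopped. Reduced to a sketch, with module names taken from the rmmod lines above:

  sync
  modprobe -r nvme-tcp             # drags nvme_tcp / nvme_fabrics / nvme_keyring back out
  modprobe -r nvme-fabrics || true
  kill $nvmfpid && wait $nvmfpid   # killprocess: terminate the nvmf_tgt started earlier and reap it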
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.334 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.335 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.335 03:23:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.240 00:12:47.240 real 0m10.857s 00:12:47.240 user 0m10.634s 00:12:47.240 sys 0m4.859s 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.240 ************************************ 00:12:47.240 END TEST nvmf_target_discovery 00:12:47.240 ************************************ 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.240 03:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 ************************************ 00:12:47.499 START TEST nvmf_referrals 00:12:47.499 ************************************ 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:47.499 * Looking for test storage... 
00:12:47.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.499 --rc genhtml_branch_coverage=1 00:12:47.499 --rc genhtml_function_coverage=1 00:12:47.499 --rc genhtml_legend=1 00:12:47.499 --rc geninfo_all_blocks=1 00:12:47.499 --rc geninfo_unexecuted_blocks=1 00:12:47.499 00:12:47.499 ' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.499 --rc genhtml_branch_coverage=1 00:12:47.499 --rc genhtml_function_coverage=1 00:12:47.499 --rc genhtml_legend=1 00:12:47.499 --rc geninfo_all_blocks=1 00:12:47.499 --rc geninfo_unexecuted_blocks=1 00:12:47.499 00:12:47.499 ' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.499 --rc genhtml_branch_coverage=1 00:12:47.499 --rc genhtml_function_coverage=1 00:12:47.499 --rc genhtml_legend=1 00:12:47.499 --rc geninfo_all_blocks=1 00:12:47.499 --rc geninfo_unexecuted_blocks=1 00:12:47.499 00:12:47.499 ' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:47.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.499 --rc genhtml_branch_coverage=1 00:12:47.499 --rc genhtml_function_coverage=1 00:12:47.499 --rc genhtml_legend=1 00:12:47.499 --rc geninfo_all_blocks=1 00:12:47.499 --rc geninfo_unexecuted_blocks=1 00:12:47.499 00:12:47.499 ' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.499 03:23:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:52.784 03:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:52.784 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:52.784 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:52.784 
03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:52.784 Found net devices under 0000:af:00.0: cvl_0_0 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:52.784 Found net devices under 0000:af:00.1: cvl_0_1 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.784 03:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.784 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:12:52.784 00:12:52.784 --- 10.0.0.2 ping statistics --- 00:12:52.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.785 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:12:52.785 00:12:52.785 --- 10.0.0.1 ping statistics --- 00:12:52.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.785 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.785 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:53.044 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:53.044 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.044 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.044 03:23:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2585620 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2585620 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2585620 ']' 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
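Note: the nvmf_tcp_init sequence traced above builds a two-namespace topology on the E810 port pair found earlier: the target-side port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, the peer port cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, an iptables rule opens TCP/4420 on the initiator side, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. Condensed as a sketch of the commands already shown in this trace (device names and addresses are the ones reported by this runner, not fixed values):

  NS=cvl_0_0_ns_spdk                      # target-side namespace used by the harness
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"         # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator address stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                      # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> root namespace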
00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.044 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.044 [2024-12-13 03:23:54.085138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:53.044 [2024-12-13 03:23:54.085219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.044 [2024-12-13 03:23:54.202405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.303 [2024-12-13 03:23:54.310908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.303 [2024-12-13 03:23:54.310957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.303 [2024-12-13 03:23:54.310967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.303 [2024-12-13 03:23:54.310993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.303 [2024-12-13 03:23:54.311002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.303 [2024-12-13 03:23:54.313361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.303 [2024-12-13 03:23:54.313437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.303 [2024-12-13 03:23:54.313542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.303 [2024-12-13 03:23:54.313551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.871 [2024-12-13 03:23:54.923141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:53.871 03:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.871 [2024-12-13 03:23:54.955496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.871 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.872 03:23:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:53.872 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.130 03:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.130 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
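Note: the check at referrals.sh@65/@66 above (and the variants that follow) always compares two views of the same referral list: what the target reports over the SPDK RPC and what a host sees in the discovery log page via nvme discover. A condensed sketch of the get_referral_ips helper as traced here (rpc_cmd is the harness wrapper around scripts/rpc.py; the --hostnqn/--hostid flags shown in the trace are omitted for brevity):

  get_referral_ips() {
      if [[ $1 == rpc ]]; then      # target-side view, straight from the RPC
          rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort | xargs
      else                          # host-side view, parsed out of the discovery log
          nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
              | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort | xargs
      fi
  }
  [[ "$(get_referral_ips rpc)" == "$(get_referral_ips nvme)" ]]   # both views must list the same addresses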
00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.388 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:54.646 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:54.646 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:54.646 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:54.646 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:54.647 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:54.647 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.647 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:54.905 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:54.905 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:54.905 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:54.905 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:54.905 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.905 03:23:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:54.905 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:54.905 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:54.905 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.905 03:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.905 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.905 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:54.905 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:54.906 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.164 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:55.164 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:55.164 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:55.164 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:55.164 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:55.164 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.165 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:55.423 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:55.423 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
00:12:55.423 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:55.423 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:55.423 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.423 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:55.681 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:55.940 rmmod nvme_tcp 00:12:55.940 rmmod nvme_fabrics 00:12:55.940 rmmod nvme_keyring 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2585620 ']' 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2585620 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2585620 ']' 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2585620 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.940 03:23:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2585620 00:12:55.940 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.940 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.940 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2585620' 00:12:55.940 killing process with pid 2585620 00:12:55.940 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2585620 00:12:55.940 03:23:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2585620 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.319 03:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.224 00:12:59.224 real 0m11.790s 00:12:59.224 user 0m16.916s 00:12:59.224 sys 0m4.854s 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 ************************************ 00:12:59.224 END TEST nvmf_referrals 00:12:59.224 ************************************ 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.224 ************************************ 00:12:59.224 START TEST nvmf_connect_disconnect 00:12:59.224 ************************************ 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:59.224 * Looking for test storage... 
00:12:59.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:59.224 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:59.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.484 --rc genhtml_branch_coverage=1 00:12:59.484 --rc genhtml_function_coverage=1 00:12:59.484 --rc genhtml_legend=1 00:12:59.484 --rc geninfo_all_blocks=1 00:12:59.484 --rc geninfo_unexecuted_blocks=1 00:12:59.484 00:12:59.484 ' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:59.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.484 --rc genhtml_branch_coverage=1 00:12:59.484 --rc genhtml_function_coverage=1 00:12:59.484 --rc genhtml_legend=1 00:12:59.484 --rc geninfo_all_blocks=1 00:12:59.484 --rc geninfo_unexecuted_blocks=1 00:12:59.484 00:12:59.484 ' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:59.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.484 --rc genhtml_branch_coverage=1 00:12:59.484 --rc genhtml_function_coverage=1 00:12:59.484 --rc genhtml_legend=1 00:12:59.484 --rc geninfo_all_blocks=1 00:12:59.484 --rc geninfo_unexecuted_blocks=1 00:12:59.484 00:12:59.484 ' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:59.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.484 --rc genhtml_branch_coverage=1 00:12:59.484 --rc genhtml_function_coverage=1 00:12:59.484 --rc genhtml_legend=1 00:12:59.484 --rc geninfo_all_blocks=1 00:12:59.484 --rc geninfo_unexecuted_blocks=1 00:12:59.484 00:12:59.484 ' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.484 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.485 03:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.485 03:24:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.757 
03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:04.757 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.757 
03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:04.757 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.757 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:04.758 Found net devices under 0000:af:00.0: cvl_0_0 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
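For context on the discovery trace above: gather_supported_nvmf_pci_devs matches the two E810 ports (vendor 0x8086, device 0x159b) against the e810 ID list and then resolves the kernel interface behind each PCI address through sysfs, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from. A minimal sketch of that sysfs lookup follows; the PCI address is taken from the trace, and the loop is a simplification for illustration, not a copy of nvmf/common.sh.

# Illustrative sketch: list the net interfaces behind one PCI NIC, using the
# same sysfs glob common.sh expands ("/sys/bus/pci/devices/$pci/net/"*).
pci=0000:af:00.0                      # first E810 port reported in the trace
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
  [ -e "$dev" ] || continue           # skip if no interface is bound to this port
  echo "Found net devices under $pci: ${dev##*/}"
done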
00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:04.758 Found net devices under 0000:af:00.1: cvl_0_1 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.758 03:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:05.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:13:05.017 00:13:05.017 --- 10.0.0.2 ping statistics --- 00:13:05.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.017 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:13:05.017 00:13:05.017 --- 10.0.0.1 ping statistics --- 00:13:05.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.017 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2589930 00:13:05.017 03:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2589930 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2589930 ']' 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.017 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.017 [2024-12-13 03:24:06.166648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:05.017 [2024-12-13 03:24:06.166739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.276 [2024-12-13 03:24:06.286053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.276 [2024-12-13 03:24:06.396768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.276 [2024-12-13 03:24:06.396810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.276 [2024-12-13 03:24:06.396821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.276 [2024-12-13 03:24:06.396832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.276 [2024-12-13 03:24:06.396840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
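To make the nvmf_tcp_init / nvmfappstart trace above easier to follow: one E810 interface (cvl_0_0) is moved into a fresh network namespace and addressed as the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace. A condensed sketch of those steps, reconstructed from the trace; the real harness backgrounds nvmf_tgt, waits for /var/tmp/spdk.sock via waitforlisten, and tags the iptables rule with an SPDK_NVMF comment, all of which is simplified here.

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP to 4420 on cvl_0_1
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF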
00:13:05.276 [2024-12-13 03:24:06.399004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.276 [2024-12-13 03:24:06.399076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.276 [2024-12-13 03:24:06.399137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.276 [2024-12-13 03:24:06.399147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.842 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.842 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:05.842 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:05.842 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:05.842 03:24:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.842 [2024-12-13 03:24:07.019361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.842 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.100 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:06.100 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:06.100 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.100 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.101 03:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.101 [2024-12-13 03:24:07.143374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:06.101 03:24:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:08.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.013 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:13.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.848 rmmod nvme_tcp 00:16:59.848 rmmod nvme_fabrics 00:16:59.848 rmmod nvme_keyring 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2589930 ']' 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2589930 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2589930 ']' 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2589930 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
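Between the listener notice above and the cleanup trace that follows, the test configures the target over its RPC socket and then drives 100 connect/disconnect cycles; every "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line is nvme-cli's output for one iteration. The sketch below restates that sequence under two assumptions: scripts/rpc.py (default socket /var/tmp/spdk.sock) stands in for the harness's rpc_cmd wrapper, and the loop body approximates connect_disconnect.sh rather than copying it. The RPC methods, their arguments, the iteration count, and the 'nvme connect -i 8' invocation are taken from the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # stand-in for rpc_cmd

# Target configuration, as issued over the RPC socket in the trace.
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 100 connect/disconnect iterations (num_iterations=100, NVME_CONNECT='nvme connect -i 8').
for i in $(seq 1 100); do
  nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints the "disconnected 1 controller(s)" lines
done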
00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589930 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2589930' 00:16:59.848 killing process with pid 2589930 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2589930 00:16:59.848 03:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2589930 00:17:01.226 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.227 03:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.762 00:17:03.762 real 4m4.069s 00:17:03.762 user 15m31.997s 00:17:03.762 sys 0m25.342s 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:03.762 ************************************ 00:17:03.762 END TEST nvmf_connect_disconnect 00:17:03.762 ************************************ 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.762 03:28:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.762 ************************************ 00:17:03.762 START TEST nvmf_multitarget 00:17:03.762 ************************************ 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:03.762 * Looking for test storage... 00:17:03.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.762 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:03.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.763 --rc genhtml_branch_coverage=1 00:17:03.763 --rc genhtml_function_coverage=1 00:17:03.763 --rc genhtml_legend=1 00:17:03.763 --rc geninfo_all_blocks=1 00:17:03.763 --rc geninfo_unexecuted_blocks=1 00:17:03.763 00:17:03.763 ' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:03.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.763 --rc genhtml_branch_coverage=1 00:17:03.763 --rc genhtml_function_coverage=1 00:17:03.763 --rc genhtml_legend=1 00:17:03.763 --rc geninfo_all_blocks=1 00:17:03.763 --rc geninfo_unexecuted_blocks=1 00:17:03.763 00:17:03.763 ' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:03.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.763 --rc genhtml_branch_coverage=1 00:17:03.763 --rc genhtml_function_coverage=1 00:17:03.763 --rc genhtml_legend=1 00:17:03.763 --rc geninfo_all_blocks=1 00:17:03.763 --rc geninfo_unexecuted_blocks=1 00:17:03.763 00:17:03.763 ' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:03.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.763 --rc genhtml_branch_coverage=1 00:17:03.763 --rc genhtml_function_coverage=1 00:17:03.763 --rc genhtml_legend=1 00:17:03.763 --rc geninfo_all_blocks=1 00:17:03.763 --rc geninfo_unexecuted_blocks=1 00:17:03.763 00:17:03.763 ' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.763 03:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:03.763 03:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.763 03:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
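One small piece of reusable logic in the nvmf_multitarget prologue a little further up is the lcov version gate: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field (the 'lt 1.15 2' call in the trace). Below is a hedged reconstruction of that comparison; it follows the traced control flow but is a simplification, not the verbatim cmp_versions from scripts/common.sh.

# Illustrative reconstruction of the version comparison traced above (simplified).
cmp_versions() {                      # usage: cmp_versions 1.15 "<" 2
  local -a ver1 ver2
  local op=$2 v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == ">" ]]; return; fi
    if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == "<" ]]; return; fi
  done
  [[ $op == "=" ]]
}
lt() { cmp_versions "$1" "<" "$2"; }
lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' call in the trace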
00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:09.036 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:09.036 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:09.036 Found net devices under 0000:af:00.0: cvl_0_0 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:09.036 Found net devices under 0000:af:00.1: cvl_0_1 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:09.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:17:09.036 00:17:09.036 --- 10.0.0.2 ping statistics --- 00:17:09.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.036 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:17:09.036 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:17:09.036 00:17:09.037 --- 10.0.0.1 ping statistics --- 00:17:09.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.037 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2633615 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2633615 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2633615 ']' 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.037 03:28:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:09.037 [2024-12-13 03:28:09.709625] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:09.037 [2024-12-13 03:28:09.709714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.037 [2024-12-13 03:28:09.827741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.037 [2024-12-13 03:28:09.932343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.037 [2024-12-13 03:28:09.932388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.037 [2024-12-13 03:28:09.932399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.037 [2024-12-13 03:28:09.932408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.037 [2024-12-13 03:28:09.932415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.037 [2024-12-13 03:28:09.934750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.037 [2024-12-13 03:28:09.934824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.037 [2024-12-13 03:28:09.934886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.037 [2024-12-13 03:28:09.934896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:09.605 "nvmf_tgt_1" 00:17:09.605 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:09.864 "nvmf_tgt_2" 00:17:09.864 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
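Up to this point the multitarget test has driven the target through test/nvmf/target/multitarget_rpc.py: it added two extra targets (nvmf_tgt_1 and nvmf_tgt_2, 32 subsystem slots each) and is about to re-read the target list to confirm the count. A condensed, standalone sketch of that create-and-verify step, assuming the helper reaches the nvmf_tgt started above over its default RPC socket (the trace passes no explicit socket argument):

    # Create two extra targets and check that the default target plus the new ones are listed.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    count=$("$rpc" nvmf_get_targets | jq length)
    [ "$count" -eq 3 ] || { echo "expected 3 targets, got $count" >&2; exit 1; }

The 3-versus-3 comparison in the trace that follows is exactly this check; the later nvmf_delete_target calls bring the list back down to the single default target.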
00:17:09.864 03:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:09.864 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:09.864 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:10.123 true 00:17:10.123 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:10.123 true 00:17:10.123 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:10.123 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:10.123 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:10.123 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:10.123 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:10.382 rmmod nvme_tcp 00:17:10.382 rmmod nvme_fabrics 00:17:10.382 rmmod nvme_keyring 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2633615 ']' 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2633615 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2633615 ']' 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2633615 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2633615 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.382 03:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2633615' 00:17:10.382 killing process with pid 2633615 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2633615 00:17:10.382 03:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2633615 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.760 03:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:13.665 00:17:13.665 real 0m10.197s 00:17:13.665 user 0m12.357s 00:17:13.665 sys 0m4.255s 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:13.665 ************************************ 00:17:13.665 END TEST nvmf_multitarget 00:17:13.665 ************************************ 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.665 ************************************ 00:17:13.665 START TEST nvmf_rpc 00:17:13.665 ************************************ 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:13.665 * Looking for test storage... 
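The nvmf_multitarget teardown just above (nvmftestfini) is partly hidden behind xtrace_disable, but the visible steps amount to unloading the NVMe/TCP modules, stopping the target, stripping the SPDK-tagged firewall rules, and flushing the test interfaces. A rough sketch of that cleanup, not the SPDK helper itself, with the pid and interface names taken from this run and the namespace deletion inferred from the earlier 'ip netns add' rather than shown in the trace:

    # Approximate nvmftestfini-style cleanup for this run.
    modprobe -v -r nvme-tcp                                # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    kill 2633615 && wait 2633615                           # nvmf_tgt pid from the trace; wait only works from the shell that launched it
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules carrying the SPDK_NVMF comment
    ip netns delete cvl_0_0_ns_spdk                        # assumed counterpart of the setup's 'ip netns add'
    ip -4 addr flush cvl_0_1

The iptables round-trip mirrors the iptr helper seen in the trace: rules added during setup are tagged with an SPDK_NVMF comment, so filtering the saved ruleset and restoring it removes just those entries.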
00:17:13.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:13.665 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:13.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.925 --rc genhtml_branch_coverage=1 00:17:13.925 --rc genhtml_function_coverage=1 00:17:13.925 --rc genhtml_legend=1 00:17:13.925 --rc geninfo_all_blocks=1 00:17:13.925 --rc geninfo_unexecuted_blocks=1 00:17:13.925 00:17:13.925 ' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:13.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.925 --rc genhtml_branch_coverage=1 00:17:13.925 --rc genhtml_function_coverage=1 00:17:13.925 --rc genhtml_legend=1 00:17:13.925 --rc geninfo_all_blocks=1 00:17:13.925 --rc geninfo_unexecuted_blocks=1 00:17:13.925 00:17:13.925 ' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:13.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.925 --rc genhtml_branch_coverage=1 00:17:13.925 --rc genhtml_function_coverage=1 00:17:13.925 --rc genhtml_legend=1 00:17:13.925 --rc geninfo_all_blocks=1 00:17:13.925 --rc geninfo_unexecuted_blocks=1 00:17:13.925 00:17:13.925 ' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:13.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.925 --rc genhtml_branch_coverage=1 00:17:13.925 --rc genhtml_function_coverage=1 00:17:13.925 --rc genhtml_legend=1 00:17:13.925 --rc geninfo_all_blocks=1 00:17:13.925 --rc geninfo_unexecuted_blocks=1 00:17:13.925 00:17:13.925 ' 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
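The scripts/common.sh trace above is the coverage-tooling version probe: lcov --version reports 1.15, and cmp_versions splits both version strings on '.', '-' and ':' and compares the numeric fields left to right to conclude that 1.15 is lower than 2, which keeps the lcov_branch_coverage/lcov_function_coverage spelling seen in the LCOV_OPTS lines above. A standalone sketch of the same comparison idea (not the SPDK helper itself; fields are treated as plain decimal integers):

    # Succeed when dotted version $1 is strictly lower than $2, comparing numeric fields left to right.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i x y n
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}     # missing trailing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                            # equal versions compare as not lower
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"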
00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.925 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:13.926 03:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:13.926 03:28:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:19.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:19.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:19.198 Found net devices under 0000:af:00.0: cvl_0_0 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:19.198 Found net devices under 0000:af:00.1: cvl_0_1 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:19.198 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.199 03:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.199 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:17:19.458 00:17:19.458 --- 10.0.0.2 ping statistics --- 00:17:19.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.458 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:17:19.458 00:17:19.458 --- 10.0.0.1 ping statistics --- 00:17:19.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.458 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2637516 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2637516 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2637516 ']' 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.458 03:28:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.458 [2024-12-13 03:28:20.611707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:19.458 [2024-12-13 03:28:20.611847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.717 [2024-12-13 03:28:20.732882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.717 [2024-12-13 03:28:20.844730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.717 [2024-12-13 03:28:20.844770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.717 [2024-12-13 03:28:20.844781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.717 [2024-12-13 03:28:20.844792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.717 [2024-12-13 03:28:20.844800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.717 [2024-12-13 03:28:20.847110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.717 [2024-12-13 03:28:20.847187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.717 [2024-12-13 03:28:20.847251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.717 [2024-12-13 03:28:20.847257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:20.285 "tick_rate": 2100000000, 00:17:20.285 "poll_groups": [ 00:17:20.285 { 00:17:20.285 "name": "nvmf_tgt_poll_group_000", 00:17:20.285 "admin_qpairs": 0, 00:17:20.285 "io_qpairs": 0, 00:17:20.285 "current_admin_qpairs": 0, 00:17:20.285 "current_io_qpairs": 0, 00:17:20.285 "pending_bdev_io": 0, 00:17:20.285 "completed_nvme_io": 0, 00:17:20.285 "transports": [] 00:17:20.285 }, 00:17:20.285 { 00:17:20.285 "name": "nvmf_tgt_poll_group_001", 00:17:20.285 "admin_qpairs": 0, 00:17:20.285 "io_qpairs": 0, 00:17:20.285 "current_admin_qpairs": 0, 00:17:20.285 "current_io_qpairs": 0, 00:17:20.285 "pending_bdev_io": 0, 00:17:20.285 "completed_nvme_io": 0, 00:17:20.285 "transports": [] 00:17:20.285 }, 00:17:20.285 { 00:17:20.285 "name": "nvmf_tgt_poll_group_002", 00:17:20.285 "admin_qpairs": 0, 00:17:20.285 "io_qpairs": 0, 00:17:20.285 
"current_admin_qpairs": 0, 00:17:20.285 "current_io_qpairs": 0, 00:17:20.285 "pending_bdev_io": 0, 00:17:20.285 "completed_nvme_io": 0, 00:17:20.285 "transports": [] 00:17:20.285 }, 00:17:20.285 { 00:17:20.285 "name": "nvmf_tgt_poll_group_003", 00:17:20.285 "admin_qpairs": 0, 00:17:20.285 "io_qpairs": 0, 00:17:20.285 "current_admin_qpairs": 0, 00:17:20.285 "current_io_qpairs": 0, 00:17:20.285 "pending_bdev_io": 0, 00:17:20.285 "completed_nvme_io": 0, 00:17:20.285 "transports": [] 00:17:20.285 } 00:17:20.285 ] 00:17:20.285 }' 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:20.285 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 [2024-12-13 03:28:21.576857] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:20.544 "tick_rate": 2100000000, 00:17:20.544 "poll_groups": [ 00:17:20.544 { 00:17:20.544 "name": "nvmf_tgt_poll_group_000", 00:17:20.544 "admin_qpairs": 0, 00:17:20.544 "io_qpairs": 0, 00:17:20.544 "current_admin_qpairs": 0, 00:17:20.544 "current_io_qpairs": 0, 00:17:20.544 "pending_bdev_io": 0, 00:17:20.544 "completed_nvme_io": 0, 00:17:20.544 "transports": [ 00:17:20.544 { 00:17:20.544 "trtype": "TCP" 00:17:20.544 } 00:17:20.544 ] 00:17:20.544 }, 00:17:20.544 { 00:17:20.544 "name": "nvmf_tgt_poll_group_001", 00:17:20.544 "admin_qpairs": 0, 00:17:20.544 "io_qpairs": 0, 00:17:20.544 "current_admin_qpairs": 0, 00:17:20.544 "current_io_qpairs": 0, 00:17:20.544 "pending_bdev_io": 0, 00:17:20.544 "completed_nvme_io": 0, 00:17:20.544 "transports": [ 00:17:20.544 { 00:17:20.544 "trtype": "TCP" 00:17:20.544 } 00:17:20.544 ] 00:17:20.544 }, 00:17:20.544 { 00:17:20.544 "name": "nvmf_tgt_poll_group_002", 00:17:20.544 "admin_qpairs": 0, 00:17:20.544 "io_qpairs": 0, 00:17:20.544 "current_admin_qpairs": 0, 00:17:20.544 "current_io_qpairs": 0, 00:17:20.544 "pending_bdev_io": 0, 00:17:20.544 "completed_nvme_io": 0, 00:17:20.544 "transports": [ 00:17:20.544 { 00:17:20.544 "trtype": "TCP" 
00:17:20.544 } 00:17:20.544 ] 00:17:20.544 }, 00:17:20.544 { 00:17:20.544 "name": "nvmf_tgt_poll_group_003", 00:17:20.544 "admin_qpairs": 0, 00:17:20.544 "io_qpairs": 0, 00:17:20.544 "current_admin_qpairs": 0, 00:17:20.544 "current_io_qpairs": 0, 00:17:20.544 "pending_bdev_io": 0, 00:17:20.544 "completed_nvme_io": 0, 00:17:20.544 "transports": [ 00:17:20.544 { 00:17:20.544 "trtype": "TCP" 00:17:20.544 } 00:17:20.544 ] 00:17:20.544 } 00:17:20.544 ] 00:17:20.544 }' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:20.544 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:20.545 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:20.545 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:20.545 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:20.545 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.545 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.804 Malloc1 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.804 [2024-12-13 03:28:21.821651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:20.804 [2024-12-13 03:28:21.851000] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:20.804 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:20.804 could not add new controller: failed to write to nvme-fabrics device 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:20.804 03:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.804 03:28:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.183 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.183 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:22.183 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.183 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:22.183 03:28:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:24.086 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
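
[Editor's aside] The waitforserial step above polls lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears, which is how the test decides the fabrics connect actually exposed a namespace. A stand-alone version of that wait loop, a sketch that mirrors the 15-try / 2-second pattern in the trace rather than the exact helper, could be:

    wait_for_serial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            # Same probe as the trace: list devices with serials and count matches.
            if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
                return 0
            fi
            sleep 2
        done
        echo "no nvme device with serial $serial appeared" >&2
        return 1
    }

    wait_for_serial SPDKISFASTANDAWESOME
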
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:24.344 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.345 [2024-12-13 03:28:25.440202] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:24.345 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:24.345 could not add new controller: failed to write to nvme-fabrics device 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.345 
03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.345 03:28:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:25.801 03:28:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:25.801 03:28:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:25.801 03:28:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:25.801 03:28:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:25.801 03:28:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.711 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.970 
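
[Editor's aside] The preceding traces exercise SPDK's per-subsystem access control: with allow_any_host disabled, an initiator whose host NQN is not on the allow list is rejected at connect time with "does not allow host", and the test flips that behaviour with nvmf_subsystem_add_host, nvmf_subsystem_remove_host, and nvmf_subsystem_allow_any_host. A compressed sketch of that round trip, with the host NQN and the 10.0.0.2:4420 listener copied from the trace and rpc.py assumed to be SPDK's scripts/rpc.py:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    SUBNQN=nqn.2016-06.io.spdk:cnode1

    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"     # enforce the allow list
    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" || echo "rejected as expected"

    ./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"   # now this host is allowed
    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    nvme disconnect -n "$SUBNQN"
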
03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.970 [2024-12-13 03:28:28.937706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.970 03:28:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:28.907 03:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:28.907 03:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:28.907 03:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:28.907 03:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:28.907 03:28:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.441 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.442 [2024-12-13 03:28:32.423527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
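
[Editor's aside] The block just completed is one pass of a five-iteration loop (target/rpc.sh@81-94) that builds a subsystem from scratch, attaches Malloc1 as namespace 5, enables allow_any_host, connects and disconnects an initiator, then tears everything back down; the iterations that follow repeat it verbatim. Reduced to the RPCs involved (names and addresses taken from the trace; the --hostnqn/--hostid flags are omitted here since any host is allowed at this point):

    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # (the test waits for the namespace to appear, as in the wait_for_serial sketch above)
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
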
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.442 03:28:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:32.379 03:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:32.379 03:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:32.379 03:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.379 03:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:32.379 03:28:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:34.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.912 [2024-12-13 03:28:35.873613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.912 03:28:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:35.849 03:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:35.849 03:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:35.849 03:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:35.849 03:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:35.849 03:28:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:38.383 
03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:38.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 [2024-12-13 03:28:39.368294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 03:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.761 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:39.761 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:39.761 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.761 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:39.761 03:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:41.665 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:41.665 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:41.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:41.666 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.924 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 [2024-12-13 03:28:42.908429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 03:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.860 03:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:42.860 03:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:42.860 03:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.860 03:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:42.860 03:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:45.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:45.393 
03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.393 [2024-12-13 03:28:46.399565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:45.393 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
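
[Editor's aside] From target/rpc.sh@99 onward the test runs a second five-pass loop that never connects an initiator: it creates the subsystem and listener, adds Malloc1 (defaulting to namespace 1), enables allow_any_host, then immediately removes the namespace and deletes the subsystem, exercising the add/remove RPC paths on an idle target. Stripped of the xtrace noise, one pass is roughly:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
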
common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 [2024-12-13 03:28:46.447687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 
03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 [2024-12-13 03:28:46.495871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 [2024-12-13 03:28:46.544042] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.394 [2024-12-13 03:28:46.592211] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.394 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:45.654 "tick_rate": 2100000000, 00:17:45.654 "poll_groups": [ 00:17:45.654 { 00:17:45.654 "name": "nvmf_tgt_poll_group_000", 00:17:45.654 "admin_qpairs": 2, 00:17:45.654 "io_qpairs": 168, 00:17:45.654 "current_admin_qpairs": 0, 00:17:45.654 "current_io_qpairs": 0, 00:17:45.654 "pending_bdev_io": 0, 00:17:45.654 "completed_nvme_io": 169, 00:17:45.654 "transports": [ 00:17:45.654 { 00:17:45.654 "trtype": "TCP" 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "name": "nvmf_tgt_poll_group_001", 00:17:45.654 "admin_qpairs": 2, 00:17:45.654 "io_qpairs": 168, 00:17:45.654 "current_admin_qpairs": 0, 00:17:45.654 "current_io_qpairs": 0, 00:17:45.654 "pending_bdev_io": 0, 00:17:45.654 "completed_nvme_io": 267, 00:17:45.654 "transports": [ 00:17:45.654 { 00:17:45.654 "trtype": "TCP" 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "name": "nvmf_tgt_poll_group_002", 00:17:45.654 "admin_qpairs": 1, 00:17:45.654 "io_qpairs": 168, 00:17:45.654 "current_admin_qpairs": 0, 00:17:45.654 "current_io_qpairs": 0, 00:17:45.654 "pending_bdev_io": 0, 00:17:45.654 "completed_nvme_io": 368, 00:17:45.654 "transports": [ 00:17:45.654 { 00:17:45.654 "trtype": "TCP" 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }, 00:17:45.654 { 00:17:45.654 "name": "nvmf_tgt_poll_group_003", 00:17:45.654 "admin_qpairs": 2, 00:17:45.654 "io_qpairs": 168, 00:17:45.654 "current_admin_qpairs": 0, 00:17:45.654 "current_io_qpairs": 0, 00:17:45.654 "pending_bdev_io": 0, 00:17:45.654 "completed_nvme_io": 218, 00:17:45.654 "transports": [ 00:17:45.654 { 00:17:45.654 "trtype": "TCP" 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 } 00:17:45.654 ] 00:17:45.654 }' 00:17:45.654 03:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.654 rmmod nvme_tcp 00:17:45.654 rmmod nvme_fabrics 00:17:45.654 rmmod nvme_keyring 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:45.654 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2637516 ']' 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2637516 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2637516 ']' 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2637516 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637516 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2637516' 00:17:45.655 killing process with pid 2637516 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2637516 00:17:45.655 03:28:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2637516 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.033 03:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.570 00:17:49.570 real 0m35.526s 00:17:49.570 user 1m49.615s 00:17:49.570 sys 0m6.402s 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.570 ************************************ 00:17:49.570 END TEST nvmf_rpc 00:17:49.570 ************************************ 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.570 ************************************ 00:17:49.570 START TEST nvmf_invalid 00:17:49.570 ************************************ 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:49.570 * Looking for test storage... 
00:17:49.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:49.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.570 --rc genhtml_branch_coverage=1 00:17:49.570 --rc genhtml_function_coverage=1 00:17:49.570 --rc genhtml_legend=1 00:17:49.570 --rc geninfo_all_blocks=1 00:17:49.570 --rc geninfo_unexecuted_blocks=1 00:17:49.570 00:17:49.570 ' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:49.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.570 --rc genhtml_branch_coverage=1 00:17:49.570 --rc genhtml_function_coverage=1 00:17:49.570 --rc genhtml_legend=1 00:17:49.570 --rc geninfo_all_blocks=1 00:17:49.570 --rc geninfo_unexecuted_blocks=1 00:17:49.570 00:17:49.570 ' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:49.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.570 --rc genhtml_branch_coverage=1 00:17:49.570 --rc genhtml_function_coverage=1 00:17:49.570 --rc genhtml_legend=1 00:17:49.570 --rc geninfo_all_blocks=1 00:17:49.570 --rc geninfo_unexecuted_blocks=1 00:17:49.570 00:17:49.570 ' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:49.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.570 --rc genhtml_branch_coverage=1 00:17:49.570 --rc genhtml_function_coverage=1 00:17:49.570 --rc genhtml_legend=1 00:17:49.570 --rc geninfo_all_blocks=1 00:17:49.570 --rc geninfo_unexecuted_blocks=1 00:17:49.570 00:17:49.570 ' 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:49.570 03:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.570 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.571 03:28:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:54.846 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:54.846 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.846 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:54.846 Found net devices under 0000:af:00.0: cvl_0_0 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:54.847 Found net devices under 0000:af:00.1: cvl_0_1 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:54.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:17:54.847 00:17:54.847 --- 10.0.0.2 ping statistics --- 00:17:54.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.847 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:17:54.847 00:17:54.847 --- 10.0.0.1 ping statistics --- 00:17:54.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.847 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2645477 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2645477 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2645477 ']' 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.847 03:28:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:54.847 [2024-12-13 03:28:55.966131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:54.847 [2024-12-13 03:28:55.966224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.107 [2024-12-13 03:28:56.082829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:55.107 [2024-12-13 03:28:56.196578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.107 [2024-12-13 03:28:56.196618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.107 [2024-12-13 03:28:56.196628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.107 [2024-12-13 03:28:56.196639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.107 [2024-12-13 03:28:56.196647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.107 [2024-12-13 03:28:56.198936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.107 [2024-12-13 03:28:56.198951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.107 [2024-12-13 03:28:56.198969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.107 [2024-12-13 03:28:56.198973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:55.675 03:28:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31987 00:17:55.934 [2024-12-13 03:28:57.006708] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:55.934 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:55.934 { 00:17:55.934 "nqn": "nqn.2016-06.io.spdk:cnode31987", 00:17:55.934 "tgt_name": "foobar", 00:17:55.934 "method": "nvmf_create_subsystem", 00:17:55.934 "req_id": 1 00:17:55.934 } 00:17:55.934 Got JSON-RPC error response 00:17:55.934 response: 00:17:55.934 { 00:17:55.934 "code": -32603, 00:17:55.934 "message": "Unable to find target foobar" 00:17:55.934 }' 00:17:55.934 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:55.934 { 00:17:55.934 "nqn": "nqn.2016-06.io.spdk:cnode31987", 00:17:55.934 "tgt_name": "foobar", 00:17:55.934 "method": "nvmf_create_subsystem", 00:17:55.934 "req_id": 1 00:17:55.934 } 00:17:55.934 Got JSON-RPC error response 00:17:55.934 
response: 00:17:55.934 { 00:17:55.934 "code": -32603, 00:17:55.934 "message": "Unable to find target foobar" 00:17:55.934 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:55.934 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:55.934 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4164 00:17:56.193 [2024-12-13 03:28:57.203416] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4164: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:56.193 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:56.193 { 00:17:56.193 "nqn": "nqn.2016-06.io.spdk:cnode4164", 00:17:56.193 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:56.193 "method": "nvmf_create_subsystem", 00:17:56.193 "req_id": 1 00:17:56.193 } 00:17:56.193 Got JSON-RPC error response 00:17:56.193 response: 00:17:56.193 { 00:17:56.193 "code": -32602, 00:17:56.193 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:56.193 }' 00:17:56.193 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:56.193 { 00:17:56.193 "nqn": "nqn.2016-06.io.spdk:cnode4164", 00:17:56.193 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:56.193 "method": "nvmf_create_subsystem", 00:17:56.193 "req_id": 1 00:17:56.193 } 00:17:56.193 Got JSON-RPC error response 00:17:56.193 response: 00:17:56.193 { 00:17:56.193 "code": -32602, 00:17:56.193 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:56.193 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:56.193 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:56.193 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3496 00:17:56.193 [2024-12-13 03:28:57.396090] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3496: invalid model number 'SPDK_Controller' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:56.453 { 00:17:56.453 "nqn": "nqn.2016-06.io.spdk:cnode3496", 00:17:56.453 "model_number": "SPDK_Controller\u001f", 00:17:56.453 "method": "nvmf_create_subsystem", 00:17:56.453 "req_id": 1 00:17:56.453 } 00:17:56.453 Got JSON-RPC error response 00:17:56.453 response: 00:17:56.453 { 00:17:56.453 "code": -32602, 00:17:56.453 "message": "Invalid MN SPDK_Controller\u001f" 00:17:56.453 }' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:56.453 { 00:17:56.453 "nqn": "nqn.2016-06.io.spdk:cnode3496", 00:17:56.453 "model_number": "SPDK_Controller\u001f", 00:17:56.453 "method": "nvmf_create_subsystem", 00:17:56.453 "req_id": 1 00:17:56.453 } 00:17:56.453 Got JSON-RPC error response 00:17:56.453 response: 00:17:56.453 { 00:17:56.453 "code": -32602, 00:17:56.453 "message": "Invalid MN SPDK_Controller\u001f" 00:17:56.453 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:56.453 03:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:56.453 
03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:56.453 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 
00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'T$H:%zkRfq{&Jr3mdl]pu' 00:17:56.454 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'T$H:%zkRfq{&Jr3mdl]pu' nqn.2016-06.io.spdk:cnode12956 00:17:56.714 [2024-12-13 03:28:57.741252] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12956: invalid serial number 'T$H:%zkRfq{&Jr3mdl]pu' 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:56.714 { 00:17:56.714 "nqn": "nqn.2016-06.io.spdk:cnode12956", 00:17:56.714 "serial_number": "T$H:%zkRfq{&Jr3mdl]pu", 00:17:56.714 "method": "nvmf_create_subsystem", 00:17:56.714 "req_id": 1 00:17:56.714 } 00:17:56.714 Got JSON-RPC error response 00:17:56.714 response: 00:17:56.714 { 00:17:56.714 "code": -32602, 00:17:56.714 "message": "Invalid SN T$H:%zkRfq{&Jr3mdl]pu" 00:17:56.714 }' 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:56.714 { 00:17:56.714 "nqn": "nqn.2016-06.io.spdk:cnode12956", 00:17:56.714 "serial_number": "T$H:%zkRfq{&Jr3mdl]pu", 00:17:56.714 "method": "nvmf_create_subsystem", 00:17:56.714 "req_id": 1 00:17:56.714 } 00:17:56.714 Got JSON-RPC error response 00:17:56.714 response: 00:17:56.714 { 00:17:56.714 "code": -32602, 00:17:56.714 "message": "Invalid SN T$H:%zkRfq{&Jr3mdl]pu" 00:17:56.714 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:56.714 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:56.715 
03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 
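For reference, the invalid-serial-number case that completed a few entries above (target/invalid.sh@54-55) follows a simple capture-and-match pattern: the test hands a random string to nvmf_create_subsystem as the serial number, expects the RPC to fail with code -32602, and only asserts that the captured output contains the "Invalid SN" message. A minimal sketch of that pattern, not the literal invalid.sh source; the cnode name is a hypothetical placeholder (the traced call used cnode12956):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode12345       # hypothetical subsystem name
    serial=$(gen_random_s 21)                # the run above produced a 21-character string
    # rpc.py exits non-zero on the JSON-RPC error, so capture output without tripping errexit
    out=$("$rpc" nvmf_create_subsystem "$nqn" -s "$serial" 2>&1) || true
    [[ $out == *"Invalid SN"* ]]             # the test only checks for the message substring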
00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:56.715 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
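The long run of printf %x / echo -e / string+= entries above and below is the character-at-a-time string builder (gen_random_s) stepping through its loop: pick a decimal ASCII code from the chars array, convert it to hex, expand it to the literal character, and append it. A condensed reconstruction of that loop, based only on what the xtrace shows; the index selection is not visible in the trace, so the use of RANDOM below is an assumption:

    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})    # decimal codes 32-127, exactly the list the trace prints
        local string=""
        for ((ll = 0; ll < length; ll++)); do
            # decimal code -> hex via printf %x -> literal character via echo -e '\xNN'
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # the trace also checks whether the first character is '-' (target/invalid.sh@28),
        # presumably so the generated string is not mistaken for an option by rpc.py
        echo "$string"
    }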
00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=x 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x44' 00:17:56.975 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '[%VU:4&d(@wXp!}MiOarTG['\''7PRm`Rxsu]/+wDS' 00:17:56.976 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[%VU:4&d(@wXp!}MiOarTG['\''7PRm`Rxsu]/+wDS' nqn.2016-06.io.spdk:cnode5668 00:17:57.234 [2024-12-13 03:28:58.222880] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5668: invalid model number '[%VU:4&d(@wXp!}MiOarTG['7PRm`Rxsu]/+wDS' 00:17:57.234 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:57.234 { 00:17:57.234 "nqn": "nqn.2016-06.io.spdk:cnode5668", 00:17:57.234 "model_number": "[%VU:\u007f4&d(@wXp!}MiOarTG['\''7PR\u007fm`Rxsu]/+wDS", 00:17:57.234 "method": "nvmf_create_subsystem", 00:17:57.234 "req_id": 1 00:17:57.234 } 00:17:57.234 Got JSON-RPC error response 00:17:57.234 response: 00:17:57.234 { 00:17:57.234 "code": -32602, 00:17:57.234 "message": "Invalid MN [%VU:\u007f4&d(@wXp!}MiOarTG['\''7PR\u007fm`Rxsu]/+wDS" 00:17:57.234 }' 00:17:57.234 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:57.234 { 00:17:57.234 "nqn": "nqn.2016-06.io.spdk:cnode5668", 00:17:57.234 "model_number": "[%VU:\u007f4&d(@wXp!}MiOarTG['7PR\u007fm`Rxsu]/+wDS", 00:17:57.234 "method": "nvmf_create_subsystem", 00:17:57.234 "req_id": 1 00:17:57.234 } 00:17:57.234 Got JSON-RPC error response 00:17:57.234 response: 00:17:57.234 { 00:17:57.234 "code": -32602, 00:17:57.234 "message": "Invalid MN [%VU:\u007f4&d(@wXp!}MiOarTG['7PR\u007fm`Rxsu]/+wDS" 00:17:57.234 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:57.234 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:57.234 [2024-12-13 03:28:58.435732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.492 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:57.492 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:57.492 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:57.492 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:57.492 03:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:57.492 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:57.750 [2024-12-13 03:28:58.855559] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:57.750 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:57.750 { 00:17:57.750 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:57.751 "listen_address": { 00:17:57.751 "trtype": "tcp", 00:17:57.751 "traddr": "", 00:17:57.751 "trsvcid": "4421" 00:17:57.751 }, 00:17:57.751 "method": "nvmf_subsystem_remove_listener", 00:17:57.751 "req_id": 1 00:17:57.751 } 00:17:57.751 Got JSON-RPC error response 00:17:57.751 response: 00:17:57.751 { 00:17:57.751 "code": -32602, 00:17:57.751 "message": "Invalid parameters" 00:17:57.751 }' 00:17:57.751 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:57.751 { 00:17:57.751 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:57.751 "listen_address": { 00:17:57.751 "trtype": "tcp", 00:17:57.751 "traddr": "", 00:17:57.751 "trsvcid": "4421" 00:17:57.751 }, 00:17:57.751 "method": "nvmf_subsystem_remove_listener", 00:17:57.751 "req_id": 1 00:17:57.751 } 00:17:57.751 Got JSON-RPC error response 00:17:57.751 response: 00:17:57.751 { 00:17:57.751 "code": -32602, 00:17:57.751 "message": "Invalid parameters" 00:17:57.751 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:57.751 03:28:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24167 -i 0 00:17:58.009 [2024-12-13 03:28:59.044146] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24167: invalid cntlid range [0-65519] 00:17:58.009 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:58.009 { 00:17:58.009 "nqn": "nqn.2016-06.io.spdk:cnode24167", 00:17:58.009 "min_cntlid": 0, 00:17:58.009 "method": "nvmf_create_subsystem", 00:17:58.009 "req_id": 1 00:17:58.009 } 00:17:58.009 Got JSON-RPC error response 00:17:58.009 response: 00:17:58.009 { 00:17:58.009 "code": -32602, 00:17:58.009 "message": "Invalid cntlid range [0-65519]" 00:17:58.009 }' 00:17:58.009 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:58.009 { 00:17:58.009 "nqn": "nqn.2016-06.io.spdk:cnode24167", 00:17:58.009 "min_cntlid": 0, 00:17:58.009 "method": "nvmf_create_subsystem", 00:17:58.009 "req_id": 1 00:17:58.009 } 00:17:58.009 Got JSON-RPC error response 00:17:58.009 response: 00:17:58.009 { 00:17:58.009 "code": -32602, 00:17:58.009 "message": "Invalid cntlid range [0-65519]" 00:17:58.009 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:58.009 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17600 -i 65520 00:17:58.268 [2024-12-13 03:28:59.240829] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17600: invalid cntlid range [65520-65519] 00:17:58.268 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:58.268 { 00:17:58.268 "nqn": 
"nqn.2016-06.io.spdk:cnode17600", 00:17:58.268 "min_cntlid": 65520, 00:17:58.268 "method": "nvmf_create_subsystem", 00:17:58.268 "req_id": 1 00:17:58.268 } 00:17:58.268 Got JSON-RPC error response 00:17:58.268 response: 00:17:58.268 { 00:17:58.268 "code": -32602, 00:17:58.268 "message": "Invalid cntlid range [65520-65519]" 00:17:58.268 }' 00:17:58.268 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:58.268 { 00:17:58.268 "nqn": "nqn.2016-06.io.spdk:cnode17600", 00:17:58.268 "min_cntlid": 65520, 00:17:58.268 "method": "nvmf_create_subsystem", 00:17:58.268 "req_id": 1 00:17:58.268 } 00:17:58.268 Got JSON-RPC error response 00:17:58.268 response: 00:17:58.268 { 00:17:58.268 "code": -32602, 00:17:58.268 "message": "Invalid cntlid range [65520-65519]" 00:17:58.268 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:58.268 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21434 -I 0 00:17:58.268 [2024-12-13 03:28:59.445547] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21434: invalid cntlid range [1-0] 00:17:58.527 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:58.527 { 00:17:58.527 "nqn": "nqn.2016-06.io.spdk:cnode21434", 00:17:58.527 "max_cntlid": 0, 00:17:58.527 "method": "nvmf_create_subsystem", 00:17:58.527 "req_id": 1 00:17:58.527 } 00:17:58.527 Got JSON-RPC error response 00:17:58.527 response: 00:17:58.527 { 00:17:58.527 "code": -32602, 00:17:58.527 "message": "Invalid cntlid range [1-0]" 00:17:58.527 }' 00:17:58.527 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:58.527 { 00:17:58.527 "nqn": "nqn.2016-06.io.spdk:cnode21434", 00:17:58.527 "max_cntlid": 0, 00:17:58.527 "method": "nvmf_create_subsystem", 00:17:58.527 "req_id": 1 00:17:58.527 } 00:17:58.527 Got JSON-RPC error response 00:17:58.527 response: 00:17:58.527 { 00:17:58.527 "code": -32602, 00:17:58.527 "message": "Invalid cntlid range [1-0]" 00:17:58.527 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:58.527 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17131 -I 65520 00:17:58.527 [2024-12-13 03:28:59.662349] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17131: invalid cntlid range [1-65520] 00:17:58.527 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:58.527 { 00:17:58.527 "nqn": "nqn.2016-06.io.spdk:cnode17131", 00:17:58.527 "max_cntlid": 65520, 00:17:58.527 "method": "nvmf_create_subsystem", 00:17:58.527 "req_id": 1 00:17:58.527 } 00:17:58.527 Got JSON-RPC error response 00:17:58.527 response: 00:17:58.527 { 00:17:58.527 "code": -32602, 00:17:58.527 "message": "Invalid cntlid range [1-65520]" 00:17:58.527 }' 00:17:58.527 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:58.527 { 00:17:58.527 "nqn": "nqn.2016-06.io.spdk:cnode17131", 00:17:58.527 "max_cntlid": 65520, 00:17:58.527 "method": "nvmf_create_subsystem", 00:17:58.527 "req_id": 1 00:17:58.527 } 00:17:58.527 Got JSON-RPC error response 00:17:58.527 response: 00:17:58.527 { 00:17:58.527 "code": -32602, 00:17:58.527 "message": "Invalid cntlid range [1-65520]" 
00:17:58.527 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:58.527 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5673 -i 6 -I 5 00:17:58.786 [2024-12-13 03:28:59.867011] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5673: invalid cntlid range [6-5] 00:17:58.786 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:58.786 { 00:17:58.786 "nqn": "nqn.2016-06.io.spdk:cnode5673", 00:17:58.786 "min_cntlid": 6, 00:17:58.786 "max_cntlid": 5, 00:17:58.786 "method": "nvmf_create_subsystem", 00:17:58.786 "req_id": 1 00:17:58.786 } 00:17:58.786 Got JSON-RPC error response 00:17:58.786 response: 00:17:58.786 { 00:17:58.786 "code": -32602, 00:17:58.786 "message": "Invalid cntlid range [6-5]" 00:17:58.786 }' 00:17:58.786 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:58.786 { 00:17:58.786 "nqn": "nqn.2016-06.io.spdk:cnode5673", 00:17:58.786 "min_cntlid": 6, 00:17:58.786 "max_cntlid": 5, 00:17:58.786 "method": "nvmf_create_subsystem", 00:17:58.786 "req_id": 1 00:17:58.786 } 00:17:58.786 Got JSON-RPC error response 00:17:58.786 response: 00:17:58.786 { 00:17:58.786 "code": -32602, 00:17:58.786 "message": "Invalid cntlid range [6-5]" 00:17:58.786 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:58.786 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:59.045 { 00:17:59.045 "name": "foobar", 00:17:59.045 "method": "nvmf_delete_target", 00:17:59.045 "req_id": 1 00:17:59.045 } 00:17:59.045 Got JSON-RPC error response 00:17:59.045 response: 00:17:59.045 { 00:17:59.045 "code": -32602, 00:17:59.045 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:59.045 }' 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:59.045 { 00:17:59.045 "name": "foobar", 00:17:59.045 "method": "nvmf_delete_target", 00:17:59.045 "req_id": 1 00:17:59.045 } 00:17:59.045 Got JSON-RPC error response 00:17:59.045 response: 00:17:59.045 { 00:17:59.045 "code": -32602, 00:17:59.045 "message": "The specified target doesn't exist, cannot delete it." 
00:17:59.045 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:59.045 03:28:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.045 rmmod nvme_tcp 00:17:59.045 rmmod nvme_fabrics 00:17:59.045 rmmod nvme_keyring 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2645477 ']' 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2645477 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2645477 ']' 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2645477 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2645477 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2645477' 00:17:59.045 killing process with pid 2645477 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2645477 00:17:59.045 03:29:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2645477 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.423 03:29:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.329 00:18:02.329 real 0m13.001s 00:18:02.329 user 0m23.907s 00:18:02.329 sys 0m5.022s 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 ************************************ 00:18:02.329 END TEST nvmf_invalid 00:18:02.329 ************************************ 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.329 ************************************ 00:18:02.329 START TEST nvmf_connect_stress 00:18:02.329 ************************************ 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:02.329 * Looking for test storage... 
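A retrospective note on the nvmf_invalid run that just finished (the 13-second timing summary above): every rejected cntlid call reported the range it would have produced, and together those errors imply that min_cntlid and max_cntlid must each lie in 1-65519 with min not exceeding max. A minimal sketch of those boundary probes, using a hypothetical cnode name and the same capture-and-match style as the serial-number case; this is an illustration of the pattern, not the literal invalid.sh code:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode99999   # hypothetical; each traced call used a different cnode

    # -i sets min_cntlid, -I sets max_cntlid (as seen in the traced rpc.py calls);
    # $args is left unquoted on purpose so it splits into separate flags
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # every variant is expected to fail with code -32602 and an
        # "Invalid cntlid range [..]" message, which is all the test asserts on
        out=$("$rpc" nvmf_create_subsystem "$nqn" $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]]
    done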
00:18:02.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.329 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:02.330 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.589 --rc genhtml_branch_coverage=1 00:18:02.589 --rc genhtml_function_coverage=1 00:18:02.589 --rc genhtml_legend=1 00:18:02.589 --rc geninfo_all_blocks=1 00:18:02.589 --rc geninfo_unexecuted_blocks=1 00:18:02.589 00:18:02.589 ' 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.589 --rc genhtml_branch_coverage=1 00:18:02.589 --rc genhtml_function_coverage=1 00:18:02.589 --rc genhtml_legend=1 00:18:02.589 --rc geninfo_all_blocks=1 00:18:02.589 --rc geninfo_unexecuted_blocks=1 00:18:02.589 00:18:02.589 ' 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.589 --rc genhtml_branch_coverage=1 00:18:02.589 --rc genhtml_function_coverage=1 00:18:02.589 --rc genhtml_legend=1 00:18:02.589 --rc geninfo_all_blocks=1 00:18:02.589 --rc geninfo_unexecuted_blocks=1 00:18:02.589 00:18:02.589 ' 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.589 --rc genhtml_branch_coverage=1 00:18:02.589 --rc genhtml_function_coverage=1 00:18:02.589 --rc genhtml_legend=1 00:18:02.589 --rc geninfo_all_blocks=1 00:18:02.589 --rc geninfo_unexecuted_blocks=1 00:18:02.589 00:18:02.589 ' 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.589 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:02.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.590 03:29:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:07.863 03:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:07.863 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:07.863 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:07.863 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:07.864 Found net devices under 0000:af:00.0: cvl_0_0 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:07.864 Found net devices under 0000:af:00.1: cvl_0_1 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
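The two "Found net devices under ..." lines are produced by globbing sysfs: each PCI function's net/ directory lists the kernel interface names bound to it. A standalone sketch of the same lookup, using the PCI addresses from this run (nothing SPDK-specific is needed):

  for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
  done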
-- # net_devs+=("${pci_net_devs[@]}") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
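Functionally, nvmf_tcp_init splits the two physical ports into a point-to-point pair: one port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the trace above (interface and namespace names are the ones from this run; the real iptables comment embeds the full rule text):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment SPDK_NVMF              # tagged so cleanup can strip it later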
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:07.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:18:07.864 00:18:07.864 --- 10.0.0.2 ping statistics --- 00:18:07.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.864 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:18:07.864 00:18:07.864 --- 10.0.0.1 ping statistics --- 00:18:07.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.864 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2649790 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2649790 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2649790 ']' 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
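Once both directions ping cleanly, the target application is started inside the namespace and the harness blocks until its RPC socket answers (the waitforlisten step logged above). A sketch of that launch-and-wait step, assuming SPDK's stock rpc.py client and the default /var/tmp/spdk.sock socket; the retry loop is illustrative, not the waitforlisten implementation:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
  done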
/var/tmp/spdk.sock...' 00:18:07.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.864 03:29:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:07.864 [2024-12-13 03:29:08.824369] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:07.864 [2024-12-13 03:29:08.824459] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.864 [2024-12-13 03:29:08.939707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.864 [2024-12-13 03:29:09.050660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.864 [2024-12-13 03:29:09.050698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.864 [2024-12-13 03:29:09.050709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.864 [2024-12-13 03:29:09.050718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.864 [2024-12-13 03:29:09.050726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.864 [2024-12-13 03:29:09.052811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.864 [2024-12-13 03:29:09.052874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.864 [2024-12-13 03:29:09.052884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.432 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.432 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:08.432 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.432 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.432 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.692 [2024-12-13 03:29:09.669879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
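The -m 0xE core mask decodes to binary 1110, i.e. cores 1, 2 and 3, which matches the three reactor threads reported above and presumably leaves core 0 for the rest of the harness. A one-liner to decode such a mask (purely illustrative):

  printf '0x%X -> cores: ' 0xE
  for c in $(seq 0 3); do (( (0xE >> c) & 1 )) && printf '%d ' "$c"; done; echo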
00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.692 [2024-12-13 03:29:09.691599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.692 NULL1 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2650032 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.692 03:29:09 
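Spelled out as direct rpc.py calls (rpc_cmd in the trace issues the same RPCs over the target's socket), the target configuration for this test is roughly the following; parameters are copied from the trace, the explicit rpc.py form is a sketch:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512

The repeated "for i in $(seq 1 20) ... cat" entries that follow appear to append twenty copies of an RPC snippet (not visible in the trace) to rpc.txt, which the polling loop below replays while connect_stress hammers the subsystem.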
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.692 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.693 03:29:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.952 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.952 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:08.952 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.952 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.952 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.519 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.519 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:09.519 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.519 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.519 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.778 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.778 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:09.778 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.778 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.778 03:29:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.044 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.044 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:10.044 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.044 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.044 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.305 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.305 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:10.305 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.305 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.305 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.563 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.563 03:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:10.563 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:10.563 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.563 03:29:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.131 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.131 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:11.131 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.131 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.131 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.390 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.390 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:11.390 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.390 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.390 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.648 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.648 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:11.648 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.648 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.648 03:29:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:11.907 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.907 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:11.907 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:11.908 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.908 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.476 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.476 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:12.476 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.476 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.476 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.734 03:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:12.734 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.734 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.734 03:29:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.993 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.993 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:12.993 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.993 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.993 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.252 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.252 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:13.252 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.252 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.252 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.511 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.511 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:13.511 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.511 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.511 03:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.079 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.079 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:14.079 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.079 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.079 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.338 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.338 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:14.338 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.338 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.338 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.596 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.596 03:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:14.596 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.596 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.596 03:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.856 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.856 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:14.856 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.856 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.856 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.424 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.424 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:15.424 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.424 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.424 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.683 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.683 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:15.683 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.683 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.683 03:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.941 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.942 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:15.942 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.942 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.942 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.204 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.204 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:16.204 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.204 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.204 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.507 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.507 03:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:16.507 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.507 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.507 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.146 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.146 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:17.146 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.146 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.146 03:29:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.146 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.146 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:17.146 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.146 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.146 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.714 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.714 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:17.714 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.714 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.714 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.973 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.973 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:17.973 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.973 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.973 03:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.232 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.232 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:18.233 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.233 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.233 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.492 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.492 03:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:18.492 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.492 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.492 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.751 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2650032 00:18:19.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2650032) - No such process 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2650032 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.010 03:29:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.010 rmmod nvme_tcp 00:18:19.010 rmmod nvme_fabrics 00:18:19.010 rmmod nvme_keyring 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2649790 ']' 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2649790 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2649790 ']' 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2649790 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649790 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
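The long run of identical "kill -0 2650032" entries above is the harness polling the stress process: as long as kill -0 succeeds the PID is still alive and another batch of RPCs is replayed; when it fails with "No such process" the loop ends and the exit status is reaped. The shape of that loop, reduced to its essentials (illustrative, not the literal connect_stress.sh source):

  while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc_cmd < "$rpcs"        # replay the prepared RPC batch while the stress tool runs
  done
  wait "$PERF_PID"           # collect its exit status once the process is gone
  rm -f "$rpcs"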
00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649790' 00:18:19.010 killing process with pid 2649790 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2649790 00:18:19.010 03:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2649790 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.389 03:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:22.296 00:18:22.296 real 0m19.933s 00:18:22.296 user 0m43.883s 00:18:22.296 sys 0m7.671s 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.296 ************************************ 00:18:22.296 END TEST nvmf_connect_stress 00:18:22.296 ************************************ 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:22.296 ************************************ 00:18:22.296 START TEST nvmf_fused_ordering 00:18:22.296 ************************************ 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:22.296 * Looking for test storage... 
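Cleanup mirrors setup: firewall rules that were inserted with an SPDK_NVMF comment are removed by round-tripping the whole ruleset and dropping the tagged lines, then the namespace and leftover addresses are torn down. In shell terms, taken from the @791/@297/@303 entries above (the netns deletion is an assumption about what _remove_spdk_ns amounts to here, since its output is redirected away):

  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk     # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1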
00:18:22.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.296 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.558 --rc genhtml_branch_coverage=1 00:18:22.558 --rc genhtml_function_coverage=1 00:18:22.558 --rc genhtml_legend=1 00:18:22.558 --rc geninfo_all_blocks=1 00:18:22.558 --rc geninfo_unexecuted_blocks=1 00:18:22.558 00:18:22.558 ' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.558 --rc genhtml_branch_coverage=1 00:18:22.558 --rc genhtml_function_coverage=1 00:18:22.558 --rc genhtml_legend=1 00:18:22.558 --rc geninfo_all_blocks=1 00:18:22.558 --rc geninfo_unexecuted_blocks=1 00:18:22.558 00:18:22.558 ' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.558 --rc genhtml_branch_coverage=1 00:18:22.558 --rc genhtml_function_coverage=1 00:18:22.558 --rc genhtml_legend=1 00:18:22.558 --rc geninfo_all_blocks=1 00:18:22.558 --rc geninfo_unexecuted_blocks=1 00:18:22.558 00:18:22.558 ' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.558 --rc genhtml_branch_coverage=1 00:18:22.558 --rc genhtml_function_coverage=1 00:18:22.558 --rc genhtml_legend=1 00:18:22.558 --rc geninfo_all_blocks=1 00:18:22.558 --rc geninfo_unexecuted_blocks=1 00:18:22.558 00:18:22.558 ' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
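The lcov version probe above uses a small componentwise comparator: both version strings are split on '.', '-' and ':' and compared field by field, so "1.15" sorts below "2". A minimal standalone rendering of the same idea (not the repo's scripts/common.sh, just the algorithm):

  version_lt() {                          # returns 0 if $1 < $2
    local IFS=.-: a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
  }
  version_lt 1.15 2 && echo "lcov is older than 2"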
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
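common.sh derives the initiator identity from nvme-cli: gen-hostnqn emits an "nqn.2014-08.org.nvmexpress:uuid:<uuid>" string, and the host ID logged above is that UUID suffix. A small sketch of the same derivation; the uuidgen fallback is an assumption, not something common.sh is shown doing:

  NVME_HOSTNQN=$(nvme gen-hostnqn 2>/dev/null) ||
      NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # trailing UUID, e.g. 80b56b8f-...
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")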
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.558 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:22.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:22.559 03:29:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:27.827 03:29:28 
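[editor's note] Two details in the trace above are worth calling out. First, paths/export.sh prepends the Go/protoc/golangci directories on every sourcing, so PATH accumulates the same entries repeatedly, as the long export lines show. Second, nvmf/common.sh line 33 reports "[: : integer expression expected" because an empty variable is fed to an arithmetic test. The sketch below is not the project's code; it only illustrates the usual bash guards for both pitfalls, with FLAG a hypothetical name since the real variable is not visible in the trace.

  # Idempotent PATH prepend: skip the directory if it is already present.
  dir=/opt/go/1.21.1/bin
  case ":$PATH:" in
    *":$dir:"*) ;;                 # already on PATH, do nothing
    *) PATH="$dir:$PATH" ;;
  esac

  # Arithmetic test guarded against an empty value (FLAG is hypothetical).
  if [ "${FLAG:-0}" -eq 1 ]; then
    echo "feature enabled"
  fi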
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:27.827 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:27.827 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:27.827 Found net devices under 0000:af:00.0: cvl_0_0 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:27.827 Found net devices under 0000:af:00.1: cvl_0_1 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
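[editor's note] The discovery loop above resolves each whitelisted PCI function (0000:af:00.0 and 0000:af:00.1, both 0x8086:0x159b handled by the ice driver) to its kernel network interface by globbing sysfs and keeping only the basename, which is how cvl_0_0 and cvl_0_1 end up in net_devs. A standalone sketch of the same lookup, reconstructed from the trace rather than copied from nvmf/common.sh:

  # Map a PCI function to the net interfaces the kernel created for it.
  pci=0000:af:00.0                                    # example address from the log
  pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
  pci_net_devs=( "${pci_net_devs[@]##*/}" )           # strip the sysfs path, keep names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"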
-- # net_devs+=("${pci_net_devs[@]}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:27.827 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:27.828 03:29:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:27.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:18:27.828 00:18:27.828 --- 10.0.0.2 ping statistics --- 00:18:27.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.828 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:18:27.828 00:18:27.828 --- 10.0.0.1 ping statistics --- 00:18:27.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.828 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:27.828 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2655309 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2655309 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2655309 ']' 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
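[editor's note] nvmf_tcp_init, traced above, moves one E810 port (cvl_0_0) into a private namespace to act as the target while the other port (cvl_0_1) stays in the root namespace as the initiator, opens TCP/4420 in iptables, and confirms 10.0.0.1 <-> 10.0.0.2 reachability with one ping in each direction before nvmf_tgt is started inside the namespace. Condensed from the commands visible in the trace (not the script itself; the iptables comment string is shortened here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator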
/var/tmp/spdk.sock...' 00:18:28.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.087 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.087 [2024-12-13 03:29:29.136583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:28.087 [2024-12-13 03:29:29.136673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.087 [2024-12-13 03:29:29.254353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.345 [2024-12-13 03:29:29.357430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.345 [2024-12-13 03:29:29.357471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.345 [2024-12-13 03:29:29.357481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.345 [2024-12-13 03:29:29.357490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.345 [2024-12-13 03:29:29.357498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.345 [2024-12-13 03:29:29.358715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 [2024-12-13 03:29:29.975475] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 [2024-12-13 03:29:29.991627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.912 03:29:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 NULL1 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.912 03:29:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:28.912 [2024-12-13 03:29:30.064578] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
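[editor's note] With nvmf_tgt listening on /var/tmp/spdk.sock, fused_ordering.sh configures it through rpc_cmd: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is added (-a, a serial number, -m 10), a listener is opened on 10.0.0.2:4420, and a null bdev (1000 MiB, 512-byte blocks, shown as "size: 1GB" below) is attached as namespace 1 before the fused_ordering initiator binary is pointed at the subsystem. Driven by hand, the same sequence would look roughly like this, assuming rpc_cmd wraps scripts/rpc.py against the default socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc_cmd target
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1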
00:18:28.912 [2024-12-13 03:29:30.064666] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655478 ] 00:18:29.479 Attached to nqn.2016-06.io.spdk:cnode1 00:18:29.479 Namespace ID: 1 size: 1GB 00:18:29.479 fused_ordering(0) 00:18:29.479 fused_ordering(1) 00:18:29.479 fused_ordering(2) 00:18:29.479 fused_ordering(3) 00:18:29.479 fused_ordering(4) 00:18:29.479 fused_ordering(5) 00:18:29.479 fused_ordering(6) 00:18:29.479 fused_ordering(7) 00:18:29.479 fused_ordering(8) 00:18:29.479 fused_ordering(9) 00:18:29.479 fused_ordering(10) 00:18:29.479 fused_ordering(11) 00:18:29.479 fused_ordering(12) 00:18:29.479 fused_ordering(13) 00:18:29.479 fused_ordering(14) 00:18:29.479 fused_ordering(15) 00:18:29.479 fused_ordering(16) 00:18:29.479 fused_ordering(17) 00:18:29.479 fused_ordering(18) 00:18:29.479 fused_ordering(19) 00:18:29.479 fused_ordering(20) 00:18:29.479 fused_ordering(21) 00:18:29.479 fused_ordering(22) 00:18:29.479 fused_ordering(23) 00:18:29.479 fused_ordering(24) 00:18:29.479 fused_ordering(25) 00:18:29.479 fused_ordering(26) 00:18:29.479 fused_ordering(27) 00:18:29.479 fused_ordering(28) 00:18:29.479 fused_ordering(29) 00:18:29.479 fused_ordering(30) 00:18:29.479 fused_ordering(31) 00:18:29.479 fused_ordering(32) 00:18:29.479 fused_ordering(33) 00:18:29.479 fused_ordering(34) 00:18:29.479 fused_ordering(35) 00:18:29.479 fused_ordering(36) 00:18:29.479 fused_ordering(37) 00:18:29.479 fused_ordering(38) 00:18:29.479 fused_ordering(39) 00:18:29.479 fused_ordering(40) 00:18:29.479 fused_ordering(41) 00:18:29.479 fused_ordering(42) 00:18:29.479 fused_ordering(43) 00:18:29.479 fused_ordering(44) 00:18:29.479 fused_ordering(45) 00:18:29.479 fused_ordering(46) 00:18:29.479 fused_ordering(47) 00:18:29.479 fused_ordering(48) 00:18:29.479 fused_ordering(49) 00:18:29.479 fused_ordering(50) 00:18:29.479 fused_ordering(51) 00:18:29.479 fused_ordering(52) 00:18:29.479 fused_ordering(53) 00:18:29.479 fused_ordering(54) 00:18:29.479 fused_ordering(55) 00:18:29.479 fused_ordering(56) 00:18:29.479 fused_ordering(57) 00:18:29.479 fused_ordering(58) 00:18:29.479 fused_ordering(59) 00:18:29.479 fused_ordering(60) 00:18:29.479 fused_ordering(61) 00:18:29.479 fused_ordering(62) 00:18:29.479 fused_ordering(63) 00:18:29.479 fused_ordering(64) 00:18:29.479 fused_ordering(65) 00:18:29.479 fused_ordering(66) 00:18:29.479 fused_ordering(67) 00:18:29.479 fused_ordering(68) 00:18:29.479 fused_ordering(69) 00:18:29.479 fused_ordering(70) 00:18:29.479 fused_ordering(71) 00:18:29.479 fused_ordering(72) 00:18:29.479 fused_ordering(73) 00:18:29.479 fused_ordering(74) 00:18:29.479 fused_ordering(75) 00:18:29.479 fused_ordering(76) 00:18:29.479 fused_ordering(77) 00:18:29.479 fused_ordering(78) 00:18:29.479 fused_ordering(79) 00:18:29.479 fused_ordering(80) 00:18:29.479 fused_ordering(81) 00:18:29.479 fused_ordering(82) 00:18:29.479 fused_ordering(83) 00:18:29.479 fused_ordering(84) 00:18:29.479 fused_ordering(85) 00:18:29.479 fused_ordering(86) 00:18:29.479 fused_ordering(87) 00:18:29.479 fused_ordering(88) 00:18:29.479 fused_ordering(89) 00:18:29.479 fused_ordering(90) 00:18:29.479 fused_ordering(91) 00:18:29.479 fused_ordering(92) 00:18:29.479 fused_ordering(93) 00:18:29.479 fused_ordering(94) 00:18:29.479 fused_ordering(95) 00:18:29.479 fused_ordering(96) 00:18:29.479 fused_ordering(97) 00:18:29.479 fused_ordering(98) 
00:18:29.479 fused_ordering(99) 00:18:29.479 fused_ordering(100) 00:18:29.479 fused_ordering(101) 00:18:29.479 fused_ordering(102) 00:18:29.479 fused_ordering(103) 00:18:29.479 fused_ordering(104) 00:18:29.479 fused_ordering(105) 00:18:29.479 fused_ordering(106) 00:18:29.479 fused_ordering(107) 00:18:29.479 fused_ordering(108) 00:18:29.479 fused_ordering(109) 00:18:29.479 fused_ordering(110) 00:18:29.479 fused_ordering(111) 00:18:29.479 fused_ordering(112) 00:18:29.479 fused_ordering(113) 00:18:29.479 fused_ordering(114) 00:18:29.479 fused_ordering(115) 00:18:29.479 fused_ordering(116) 00:18:29.479 fused_ordering(117) 00:18:29.479 fused_ordering(118) 00:18:29.479 fused_ordering(119) 00:18:29.479 fused_ordering(120) 00:18:29.479 fused_ordering(121) 00:18:29.479 fused_ordering(122) 00:18:29.479 fused_ordering(123) 00:18:29.479 fused_ordering(124) 00:18:29.479 fused_ordering(125) 00:18:29.479 fused_ordering(126) 00:18:29.479 fused_ordering(127) 00:18:29.479 fused_ordering(128) 00:18:29.479 fused_ordering(129) 00:18:29.479 fused_ordering(130) 00:18:29.479 fused_ordering(131) 00:18:29.479 fused_ordering(132) 00:18:29.479 fused_ordering(133) 00:18:29.479 fused_ordering(134) 00:18:29.479 fused_ordering(135) 00:18:29.479 fused_ordering(136) 00:18:29.479 fused_ordering(137) 00:18:29.479 fused_ordering(138) 00:18:29.479 fused_ordering(139) 00:18:29.479 fused_ordering(140) 00:18:29.479 fused_ordering(141) 00:18:29.479 fused_ordering(142) 00:18:29.479 fused_ordering(143) 00:18:29.479 fused_ordering(144) 00:18:29.479 fused_ordering(145) 00:18:29.479 fused_ordering(146) 00:18:29.479 fused_ordering(147) 00:18:29.480 fused_ordering(148) 00:18:29.480 fused_ordering(149) 00:18:29.480 fused_ordering(150) 00:18:29.480 fused_ordering(151) 00:18:29.480 fused_ordering(152) 00:18:29.480 fused_ordering(153) 00:18:29.480 fused_ordering(154) 00:18:29.480 fused_ordering(155) 00:18:29.480 fused_ordering(156) 00:18:29.480 fused_ordering(157) 00:18:29.480 fused_ordering(158) 00:18:29.480 fused_ordering(159) 00:18:29.480 fused_ordering(160) 00:18:29.480 fused_ordering(161) 00:18:29.480 fused_ordering(162) 00:18:29.480 fused_ordering(163) 00:18:29.480 fused_ordering(164) 00:18:29.480 fused_ordering(165) 00:18:29.480 fused_ordering(166) 00:18:29.480 fused_ordering(167) 00:18:29.480 fused_ordering(168) 00:18:29.480 fused_ordering(169) 00:18:29.480 fused_ordering(170) 00:18:29.480 fused_ordering(171) 00:18:29.480 fused_ordering(172) 00:18:29.480 fused_ordering(173) 00:18:29.480 fused_ordering(174) 00:18:29.480 fused_ordering(175) 00:18:29.480 fused_ordering(176) 00:18:29.480 fused_ordering(177) 00:18:29.480 fused_ordering(178) 00:18:29.480 fused_ordering(179) 00:18:29.480 fused_ordering(180) 00:18:29.480 fused_ordering(181) 00:18:29.480 fused_ordering(182) 00:18:29.480 fused_ordering(183) 00:18:29.480 fused_ordering(184) 00:18:29.480 fused_ordering(185) 00:18:29.480 fused_ordering(186) 00:18:29.480 fused_ordering(187) 00:18:29.480 fused_ordering(188) 00:18:29.480 fused_ordering(189) 00:18:29.480 fused_ordering(190) 00:18:29.480 fused_ordering(191) 00:18:29.480 fused_ordering(192) 00:18:29.480 fused_ordering(193) 00:18:29.480 fused_ordering(194) 00:18:29.480 fused_ordering(195) 00:18:29.480 fused_ordering(196) 00:18:29.480 fused_ordering(197) 00:18:29.480 fused_ordering(198) 00:18:29.480 fused_ordering(199) 00:18:29.480 fused_ordering(200) 00:18:29.480 fused_ordering(201) 00:18:29.480 fused_ordering(202) 00:18:29.480 fused_ordering(203) 00:18:29.480 fused_ordering(204) 00:18:29.480 fused_ordering(205) 00:18:29.738 
fused_ordering(206) 00:18:29.738 fused_ordering(207) 00:18:29.738 fused_ordering(208) 00:18:29.738 fused_ordering(209) 00:18:29.738 fused_ordering(210) 00:18:29.738 fused_ordering(211) 00:18:29.738 fused_ordering(212) 00:18:29.738 fused_ordering(213) 00:18:29.738 fused_ordering(214) 00:18:29.738 fused_ordering(215) 00:18:29.738 fused_ordering(216) 00:18:29.738 fused_ordering(217) 00:18:29.738 fused_ordering(218) 00:18:29.738 fused_ordering(219) 00:18:29.738 fused_ordering(220) 00:18:29.738 fused_ordering(221) 00:18:29.738 fused_ordering(222) 00:18:29.739 fused_ordering(223) 00:18:29.739 fused_ordering(224) 00:18:29.739 fused_ordering(225) 00:18:29.739 fused_ordering(226) 00:18:29.739 fused_ordering(227) 00:18:29.739 fused_ordering(228) 00:18:29.739 fused_ordering(229) 00:18:29.739 fused_ordering(230) 00:18:29.739 fused_ordering(231) 00:18:29.739 fused_ordering(232) 00:18:29.739 fused_ordering(233) 00:18:29.739 fused_ordering(234) 00:18:29.739 fused_ordering(235) 00:18:29.739 fused_ordering(236) 00:18:29.739 fused_ordering(237) 00:18:29.739 fused_ordering(238) 00:18:29.739 fused_ordering(239) 00:18:29.739 fused_ordering(240) 00:18:29.739 fused_ordering(241) 00:18:29.739 fused_ordering(242) 00:18:29.739 fused_ordering(243) 00:18:29.739 fused_ordering(244) 00:18:29.739 fused_ordering(245) 00:18:29.739 fused_ordering(246) 00:18:29.739 fused_ordering(247) 00:18:29.739 fused_ordering(248) 00:18:29.739 fused_ordering(249) 00:18:29.739 fused_ordering(250) 00:18:29.739 fused_ordering(251) 00:18:29.739 fused_ordering(252) 00:18:29.739 fused_ordering(253) 00:18:29.739 fused_ordering(254) 00:18:29.739 fused_ordering(255) 00:18:29.739 fused_ordering(256) 00:18:29.739 fused_ordering(257) 00:18:29.739 fused_ordering(258) 00:18:29.739 fused_ordering(259) 00:18:29.739 fused_ordering(260) 00:18:29.739 fused_ordering(261) 00:18:29.739 fused_ordering(262) 00:18:29.739 fused_ordering(263) 00:18:29.739 fused_ordering(264) 00:18:29.739 fused_ordering(265) 00:18:29.739 fused_ordering(266) 00:18:29.739 fused_ordering(267) 00:18:29.739 fused_ordering(268) 00:18:29.739 fused_ordering(269) 00:18:29.739 fused_ordering(270) 00:18:29.739 fused_ordering(271) 00:18:29.739 fused_ordering(272) 00:18:29.739 fused_ordering(273) 00:18:29.739 fused_ordering(274) 00:18:29.739 fused_ordering(275) 00:18:29.739 fused_ordering(276) 00:18:29.739 fused_ordering(277) 00:18:29.739 fused_ordering(278) 00:18:29.739 fused_ordering(279) 00:18:29.739 fused_ordering(280) 00:18:29.739 fused_ordering(281) 00:18:29.739 fused_ordering(282) 00:18:29.739 fused_ordering(283) 00:18:29.739 fused_ordering(284) 00:18:29.739 fused_ordering(285) 00:18:29.739 fused_ordering(286) 00:18:29.739 fused_ordering(287) 00:18:29.739 fused_ordering(288) 00:18:29.739 fused_ordering(289) 00:18:29.739 fused_ordering(290) 00:18:29.739 fused_ordering(291) 00:18:29.739 fused_ordering(292) 00:18:29.739 fused_ordering(293) 00:18:29.739 fused_ordering(294) 00:18:29.739 fused_ordering(295) 00:18:29.739 fused_ordering(296) 00:18:29.739 fused_ordering(297) 00:18:29.739 fused_ordering(298) 00:18:29.739 fused_ordering(299) 00:18:29.739 fused_ordering(300) 00:18:29.739 fused_ordering(301) 00:18:29.739 fused_ordering(302) 00:18:29.739 fused_ordering(303) 00:18:29.739 fused_ordering(304) 00:18:29.739 fused_ordering(305) 00:18:29.739 fused_ordering(306) 00:18:29.739 fused_ordering(307) 00:18:29.739 fused_ordering(308) 00:18:29.739 fused_ordering(309) 00:18:29.739 fused_ordering(310) 00:18:29.739 fused_ordering(311) 00:18:29.739 fused_ordering(312) 00:18:29.739 fused_ordering(313) 
00:18:29.739 fused_ordering(314) 00:18:29.739 fused_ordering(315) 00:18:29.739 fused_ordering(316) 00:18:29.739 fused_ordering(317) 00:18:29.739 fused_ordering(318) 00:18:29.739 fused_ordering(319) 00:18:29.739 fused_ordering(320) 00:18:29.739 fused_ordering(321) 00:18:29.739 fused_ordering(322) 00:18:29.739 fused_ordering(323) 00:18:29.739 fused_ordering(324) 00:18:29.739 fused_ordering(325) 00:18:29.739 fused_ordering(326) 00:18:29.739 fused_ordering(327) 00:18:29.739 fused_ordering(328) 00:18:29.739 fused_ordering(329) 00:18:29.739 fused_ordering(330) 00:18:29.739 fused_ordering(331) 00:18:29.739 fused_ordering(332) 00:18:29.739 fused_ordering(333) 00:18:29.739 fused_ordering(334) 00:18:29.739 fused_ordering(335) 00:18:29.739 fused_ordering(336) 00:18:29.739 fused_ordering(337) 00:18:29.739 fused_ordering(338) 00:18:29.739 fused_ordering(339) 00:18:29.739 fused_ordering(340) 00:18:29.739 fused_ordering(341) 00:18:29.739 fused_ordering(342) 00:18:29.739 fused_ordering(343) 00:18:29.739 fused_ordering(344) 00:18:29.739 fused_ordering(345) 00:18:29.739 fused_ordering(346) 00:18:29.739 fused_ordering(347) 00:18:29.739 fused_ordering(348) 00:18:29.739 fused_ordering(349) 00:18:29.739 fused_ordering(350) 00:18:29.739 fused_ordering(351) 00:18:29.739 fused_ordering(352) 00:18:29.739 fused_ordering(353) 00:18:29.739 fused_ordering(354) 00:18:29.739 fused_ordering(355) 00:18:29.739 fused_ordering(356) 00:18:29.739 fused_ordering(357) 00:18:29.739 fused_ordering(358) 00:18:29.739 fused_ordering(359) 00:18:29.739 fused_ordering(360) 00:18:29.739 fused_ordering(361) 00:18:29.739 fused_ordering(362) 00:18:29.739 fused_ordering(363) 00:18:29.739 fused_ordering(364) 00:18:29.739 fused_ordering(365) 00:18:29.739 fused_ordering(366) 00:18:29.739 fused_ordering(367) 00:18:29.739 fused_ordering(368) 00:18:29.739 fused_ordering(369) 00:18:29.739 fused_ordering(370) 00:18:29.739 fused_ordering(371) 00:18:29.739 fused_ordering(372) 00:18:29.739 fused_ordering(373) 00:18:29.739 fused_ordering(374) 00:18:29.739 fused_ordering(375) 00:18:29.739 fused_ordering(376) 00:18:29.739 fused_ordering(377) 00:18:29.739 fused_ordering(378) 00:18:29.739 fused_ordering(379) 00:18:29.739 fused_ordering(380) 00:18:29.739 fused_ordering(381) 00:18:29.739 fused_ordering(382) 00:18:29.739 fused_ordering(383) 00:18:29.739 fused_ordering(384) 00:18:29.739 fused_ordering(385) 00:18:29.739 fused_ordering(386) 00:18:29.739 fused_ordering(387) 00:18:29.739 fused_ordering(388) 00:18:29.739 fused_ordering(389) 00:18:29.739 fused_ordering(390) 00:18:29.739 fused_ordering(391) 00:18:29.739 fused_ordering(392) 00:18:29.739 fused_ordering(393) 00:18:29.739 fused_ordering(394) 00:18:29.739 fused_ordering(395) 00:18:29.739 fused_ordering(396) 00:18:29.739 fused_ordering(397) 00:18:29.739 fused_ordering(398) 00:18:29.739 fused_ordering(399) 00:18:29.739 fused_ordering(400) 00:18:29.739 fused_ordering(401) 00:18:29.739 fused_ordering(402) 00:18:29.739 fused_ordering(403) 00:18:29.739 fused_ordering(404) 00:18:29.739 fused_ordering(405) 00:18:29.739 fused_ordering(406) 00:18:29.739 fused_ordering(407) 00:18:29.739 fused_ordering(408) 00:18:29.739 fused_ordering(409) 00:18:29.739 fused_ordering(410) 00:18:30.305 fused_ordering(411) 00:18:30.305 fused_ordering(412) 00:18:30.305 fused_ordering(413) 00:18:30.305 fused_ordering(414) 00:18:30.305 fused_ordering(415) 00:18:30.305 fused_ordering(416) 00:18:30.305 fused_ordering(417) 00:18:30.305 fused_ordering(418) 00:18:30.305 fused_ordering(419) 00:18:30.305 fused_ordering(420) 00:18:30.305 
fused_ordering(421) 00:18:30.305 fused_ordering(422) 00:18:30.305 fused_ordering(423) 00:18:30.305 fused_ordering(424) 00:18:30.305 fused_ordering(425) 00:18:30.305 fused_ordering(426) 00:18:30.305 fused_ordering(427) 00:18:30.305 fused_ordering(428) 00:18:30.305 fused_ordering(429) 00:18:30.305 fused_ordering(430) 00:18:30.305 fused_ordering(431) 00:18:30.305 fused_ordering(432) 00:18:30.305 fused_ordering(433) 00:18:30.305 fused_ordering(434) 00:18:30.305 fused_ordering(435) 00:18:30.305 fused_ordering(436) 00:18:30.305 fused_ordering(437) 00:18:30.305 fused_ordering(438) 00:18:30.305 fused_ordering(439) 00:18:30.305 fused_ordering(440) 00:18:30.305 fused_ordering(441) 00:18:30.305 fused_ordering(442) 00:18:30.305 fused_ordering(443) 00:18:30.305 fused_ordering(444) 00:18:30.305 fused_ordering(445) 00:18:30.305 fused_ordering(446) 00:18:30.305 fused_ordering(447) 00:18:30.305 fused_ordering(448) 00:18:30.305 fused_ordering(449) 00:18:30.305 fused_ordering(450) 00:18:30.305 fused_ordering(451) 00:18:30.305 fused_ordering(452) 00:18:30.305 fused_ordering(453) 00:18:30.305 fused_ordering(454) 00:18:30.305 fused_ordering(455) 00:18:30.305 fused_ordering(456) 00:18:30.305 fused_ordering(457) 00:18:30.305 fused_ordering(458) 00:18:30.305 fused_ordering(459) 00:18:30.305 fused_ordering(460) 00:18:30.305 fused_ordering(461) 00:18:30.305 fused_ordering(462) 00:18:30.305 fused_ordering(463) 00:18:30.305 fused_ordering(464) 00:18:30.305 fused_ordering(465) 00:18:30.305 fused_ordering(466) 00:18:30.305 fused_ordering(467) 00:18:30.305 fused_ordering(468) 00:18:30.305 fused_ordering(469) 00:18:30.305 fused_ordering(470) 00:18:30.305 fused_ordering(471) 00:18:30.305 fused_ordering(472) 00:18:30.305 fused_ordering(473) 00:18:30.305 fused_ordering(474) 00:18:30.305 fused_ordering(475) 00:18:30.305 fused_ordering(476) 00:18:30.305 fused_ordering(477) 00:18:30.305 fused_ordering(478) 00:18:30.305 fused_ordering(479) 00:18:30.305 fused_ordering(480) 00:18:30.305 fused_ordering(481) 00:18:30.305 fused_ordering(482) 00:18:30.305 fused_ordering(483) 00:18:30.305 fused_ordering(484) 00:18:30.305 fused_ordering(485) 00:18:30.305 fused_ordering(486) 00:18:30.305 fused_ordering(487) 00:18:30.305 fused_ordering(488) 00:18:30.305 fused_ordering(489) 00:18:30.305 fused_ordering(490) 00:18:30.305 fused_ordering(491) 00:18:30.305 fused_ordering(492) 00:18:30.305 fused_ordering(493) 00:18:30.305 fused_ordering(494) 00:18:30.305 fused_ordering(495) 00:18:30.305 fused_ordering(496) 00:18:30.305 fused_ordering(497) 00:18:30.305 fused_ordering(498) 00:18:30.305 fused_ordering(499) 00:18:30.305 fused_ordering(500) 00:18:30.305 fused_ordering(501) 00:18:30.305 fused_ordering(502) 00:18:30.305 fused_ordering(503) 00:18:30.305 fused_ordering(504) 00:18:30.305 fused_ordering(505) 00:18:30.305 fused_ordering(506) 00:18:30.305 fused_ordering(507) 00:18:30.305 fused_ordering(508) 00:18:30.305 fused_ordering(509) 00:18:30.305 fused_ordering(510) 00:18:30.305 fused_ordering(511) 00:18:30.305 fused_ordering(512) 00:18:30.305 fused_ordering(513) 00:18:30.305 fused_ordering(514) 00:18:30.305 fused_ordering(515) 00:18:30.305 fused_ordering(516) 00:18:30.305 fused_ordering(517) 00:18:30.305 fused_ordering(518) 00:18:30.305 fused_ordering(519) 00:18:30.305 fused_ordering(520) 00:18:30.305 fused_ordering(521) 00:18:30.305 fused_ordering(522) 00:18:30.305 fused_ordering(523) 00:18:30.305 fused_ordering(524) 00:18:30.305 fused_ordering(525) 00:18:30.305 fused_ordering(526) 00:18:30.305 fused_ordering(527) 00:18:30.305 fused_ordering(528) 
00:18:30.305 fused_ordering(529) 00:18:30.305 fused_ordering(530) 00:18:30.305 fused_ordering(531) 00:18:30.305 fused_ordering(532) 00:18:30.305 fused_ordering(533) 00:18:30.305 fused_ordering(534) 00:18:30.305 fused_ordering(535) 00:18:30.305 fused_ordering(536) 00:18:30.305 fused_ordering(537) 00:18:30.305 fused_ordering(538) 00:18:30.305 fused_ordering(539) 00:18:30.305 fused_ordering(540) 00:18:30.306 fused_ordering(541) 00:18:30.306 fused_ordering(542) 00:18:30.306 fused_ordering(543) 00:18:30.306 fused_ordering(544) 00:18:30.306 fused_ordering(545) 00:18:30.306 fused_ordering(546) 00:18:30.306 fused_ordering(547) 00:18:30.306 fused_ordering(548) 00:18:30.306 fused_ordering(549) 00:18:30.306 fused_ordering(550) 00:18:30.306 fused_ordering(551) 00:18:30.306 fused_ordering(552) 00:18:30.306 fused_ordering(553) 00:18:30.306 fused_ordering(554) 00:18:30.306 fused_ordering(555) 00:18:30.306 fused_ordering(556) 00:18:30.306 fused_ordering(557) 00:18:30.306 fused_ordering(558) 00:18:30.306 fused_ordering(559) 00:18:30.306 fused_ordering(560) 00:18:30.306 fused_ordering(561) 00:18:30.306 fused_ordering(562) 00:18:30.306 fused_ordering(563) 00:18:30.306 fused_ordering(564) 00:18:30.306 fused_ordering(565) 00:18:30.306 fused_ordering(566) 00:18:30.306 fused_ordering(567) 00:18:30.306 fused_ordering(568) 00:18:30.306 fused_ordering(569) 00:18:30.306 fused_ordering(570) 00:18:30.306 fused_ordering(571) 00:18:30.306 fused_ordering(572) 00:18:30.306 fused_ordering(573) 00:18:30.306 fused_ordering(574) 00:18:30.306 fused_ordering(575) 00:18:30.306 fused_ordering(576) 00:18:30.306 fused_ordering(577) 00:18:30.306 fused_ordering(578) 00:18:30.306 fused_ordering(579) 00:18:30.306 fused_ordering(580) 00:18:30.306 fused_ordering(581) 00:18:30.306 fused_ordering(582) 00:18:30.306 fused_ordering(583) 00:18:30.306 fused_ordering(584) 00:18:30.306 fused_ordering(585) 00:18:30.306 fused_ordering(586) 00:18:30.306 fused_ordering(587) 00:18:30.306 fused_ordering(588) 00:18:30.306 fused_ordering(589) 00:18:30.306 fused_ordering(590) 00:18:30.306 fused_ordering(591) 00:18:30.306 fused_ordering(592) 00:18:30.306 fused_ordering(593) 00:18:30.306 fused_ordering(594) 00:18:30.306 fused_ordering(595) 00:18:30.306 fused_ordering(596) 00:18:30.306 fused_ordering(597) 00:18:30.306 fused_ordering(598) 00:18:30.306 fused_ordering(599) 00:18:30.306 fused_ordering(600) 00:18:30.306 fused_ordering(601) 00:18:30.306 fused_ordering(602) 00:18:30.306 fused_ordering(603) 00:18:30.306 fused_ordering(604) 00:18:30.306 fused_ordering(605) 00:18:30.306 fused_ordering(606) 00:18:30.306 fused_ordering(607) 00:18:30.306 fused_ordering(608) 00:18:30.306 fused_ordering(609) 00:18:30.306 fused_ordering(610) 00:18:30.306 fused_ordering(611) 00:18:30.306 fused_ordering(612) 00:18:30.306 fused_ordering(613) 00:18:30.306 fused_ordering(614) 00:18:30.306 fused_ordering(615) 00:18:30.564 fused_ordering(616) 00:18:30.564 fused_ordering(617) 00:18:30.564 fused_ordering(618) 00:18:30.564 fused_ordering(619) 00:18:30.564 fused_ordering(620) 00:18:30.564 fused_ordering(621) 00:18:30.564 fused_ordering(622) 00:18:30.564 fused_ordering(623) 00:18:30.564 fused_ordering(624) 00:18:30.564 fused_ordering(625) 00:18:30.564 fused_ordering(626) 00:18:30.564 fused_ordering(627) 00:18:30.564 fused_ordering(628) 00:18:30.564 fused_ordering(629) 00:18:30.564 fused_ordering(630) 00:18:30.564 fused_ordering(631) 00:18:30.564 fused_ordering(632) 00:18:30.564 fused_ordering(633) 00:18:30.564 fused_ordering(634) 00:18:30.564 fused_ordering(635) 00:18:30.564 
fused_ordering(636) 00:18:30.564 fused_ordering(637) 00:18:30.564 fused_ordering(638) 00:18:30.564 fused_ordering(639) 00:18:30.564 fused_ordering(640) 00:18:30.564 fused_ordering(641) 00:18:30.564 fused_ordering(642) 00:18:30.564 fused_ordering(643) 00:18:30.564 fused_ordering(644) 00:18:30.564 fused_ordering(645) 00:18:30.564 fused_ordering(646) 00:18:30.564 fused_ordering(647) 00:18:30.564 fused_ordering(648) 00:18:30.564 fused_ordering(649) 00:18:30.564 fused_ordering(650) 00:18:30.564 fused_ordering(651) 00:18:30.564 fused_ordering(652) 00:18:30.564 fused_ordering(653) 00:18:30.564 fused_ordering(654) 00:18:30.564 fused_ordering(655) 00:18:30.564 fused_ordering(656) 00:18:30.564 fused_ordering(657) 00:18:30.564 fused_ordering(658) 00:18:30.564 fused_ordering(659) 00:18:30.564 fused_ordering(660) 00:18:30.564 fused_ordering(661) 00:18:30.564 fused_ordering(662) 00:18:30.564 fused_ordering(663) 00:18:30.564 fused_ordering(664) 00:18:30.564 fused_ordering(665) 00:18:30.564 fused_ordering(666) 00:18:30.564 fused_ordering(667) 00:18:30.564 fused_ordering(668) 00:18:30.564 fused_ordering(669) 00:18:30.564 fused_ordering(670) 00:18:30.564 fused_ordering(671) 00:18:30.564 fused_ordering(672) 00:18:30.564 fused_ordering(673) 00:18:30.564 fused_ordering(674) 00:18:30.564 fused_ordering(675) 00:18:30.564 fused_ordering(676) 00:18:30.564 fused_ordering(677) 00:18:30.564 fused_ordering(678) 00:18:30.564 fused_ordering(679) 00:18:30.564 fused_ordering(680) 00:18:30.564 fused_ordering(681) 00:18:30.564 fused_ordering(682) 00:18:30.564 fused_ordering(683) 00:18:30.564 fused_ordering(684) 00:18:30.564 fused_ordering(685) 00:18:30.564 fused_ordering(686) 00:18:30.565 fused_ordering(687) 00:18:30.565 fused_ordering(688) 00:18:30.565 fused_ordering(689) 00:18:30.565 fused_ordering(690) 00:18:30.565 fused_ordering(691) 00:18:30.565 fused_ordering(692) 00:18:30.565 fused_ordering(693) 00:18:30.565 fused_ordering(694) 00:18:30.565 fused_ordering(695) 00:18:30.565 fused_ordering(696) 00:18:30.565 fused_ordering(697) 00:18:30.565 fused_ordering(698) 00:18:30.565 fused_ordering(699) 00:18:30.565 fused_ordering(700) 00:18:30.565 fused_ordering(701) 00:18:30.565 fused_ordering(702) 00:18:30.565 fused_ordering(703) 00:18:30.565 fused_ordering(704) 00:18:30.565 fused_ordering(705) 00:18:30.565 fused_ordering(706) 00:18:30.565 fused_ordering(707) 00:18:30.565 fused_ordering(708) 00:18:30.565 fused_ordering(709) 00:18:30.565 fused_ordering(710) 00:18:30.565 fused_ordering(711) 00:18:30.565 fused_ordering(712) 00:18:30.565 fused_ordering(713) 00:18:30.565 fused_ordering(714) 00:18:30.565 fused_ordering(715) 00:18:30.565 fused_ordering(716) 00:18:30.565 fused_ordering(717) 00:18:30.565 fused_ordering(718) 00:18:30.565 fused_ordering(719) 00:18:30.565 fused_ordering(720) 00:18:30.565 fused_ordering(721) 00:18:30.565 fused_ordering(722) 00:18:30.565 fused_ordering(723) 00:18:30.565 fused_ordering(724) 00:18:30.565 fused_ordering(725) 00:18:30.565 fused_ordering(726) 00:18:30.565 fused_ordering(727) 00:18:30.565 fused_ordering(728) 00:18:30.565 fused_ordering(729) 00:18:30.565 fused_ordering(730) 00:18:30.565 fused_ordering(731) 00:18:30.565 fused_ordering(732) 00:18:30.565 fused_ordering(733) 00:18:30.565 fused_ordering(734) 00:18:30.565 fused_ordering(735) 00:18:30.565 fused_ordering(736) 00:18:30.565 fused_ordering(737) 00:18:30.565 fused_ordering(738) 00:18:30.565 fused_ordering(739) 00:18:30.565 fused_ordering(740) 00:18:30.565 fused_ordering(741) 00:18:30.565 fused_ordering(742) 00:18:30.565 fused_ordering(743) 
00:18:30.565 fused_ordering(744) 00:18:30.565 fused_ordering(745) 00:18:30.565 fused_ordering(746) 00:18:30.565 fused_ordering(747) 00:18:30.565 fused_ordering(748) 00:18:30.565 fused_ordering(749) 00:18:30.565 fused_ordering(750) 00:18:30.565 fused_ordering(751) 00:18:30.565 fused_ordering(752) 00:18:30.565 fused_ordering(753) 00:18:30.565 fused_ordering(754) 00:18:30.565 fused_ordering(755) 00:18:30.565 fused_ordering(756) 00:18:30.565 fused_ordering(757) 00:18:30.565 fused_ordering(758) 00:18:30.565 fused_ordering(759) 00:18:30.565 fused_ordering(760) 00:18:30.565 fused_ordering(761) 00:18:30.565 fused_ordering(762) 00:18:30.565 fused_ordering(763) 00:18:30.565 fused_ordering(764) 00:18:30.565 fused_ordering(765) 00:18:30.565 fused_ordering(766) 00:18:30.565 fused_ordering(767) 00:18:30.565 fused_ordering(768) 00:18:30.565 fused_ordering(769) 00:18:30.565 fused_ordering(770) 00:18:30.565 fused_ordering(771) 00:18:30.565 fused_ordering(772) 00:18:30.565 fused_ordering(773) 00:18:30.565 fused_ordering(774) 00:18:30.565 fused_ordering(775) 00:18:30.565 fused_ordering(776) 00:18:30.565 fused_ordering(777) 00:18:30.565 fused_ordering(778) 00:18:30.565 fused_ordering(779) 00:18:30.565 fused_ordering(780) 00:18:30.565 fused_ordering(781) 00:18:30.565 fused_ordering(782) 00:18:30.565 fused_ordering(783) 00:18:30.565 fused_ordering(784) 00:18:30.565 fused_ordering(785) 00:18:30.565 fused_ordering(786) 00:18:30.565 fused_ordering(787) 00:18:30.565 fused_ordering(788) 00:18:30.565 fused_ordering(789) 00:18:30.565 fused_ordering(790) 00:18:30.565 fused_ordering(791) 00:18:30.565 fused_ordering(792) 00:18:30.565 fused_ordering(793) 00:18:30.565 fused_ordering(794) 00:18:30.565 fused_ordering(795) 00:18:30.565 fused_ordering(796) 00:18:30.565 fused_ordering(797) 00:18:30.565 fused_ordering(798) 00:18:30.565 fused_ordering(799) 00:18:30.565 fused_ordering(800) 00:18:30.565 fused_ordering(801) 00:18:30.565 fused_ordering(802) 00:18:30.565 fused_ordering(803) 00:18:30.565 fused_ordering(804) 00:18:30.565 fused_ordering(805) 00:18:30.565 fused_ordering(806) 00:18:30.565 fused_ordering(807) 00:18:30.565 fused_ordering(808) 00:18:30.565 fused_ordering(809) 00:18:30.565 fused_ordering(810) 00:18:30.565 fused_ordering(811) 00:18:30.565 fused_ordering(812) 00:18:30.565 fused_ordering(813) 00:18:30.565 fused_ordering(814) 00:18:30.565 fused_ordering(815) 00:18:30.565 fused_ordering(816) 00:18:30.565 fused_ordering(817) 00:18:30.565 fused_ordering(818) 00:18:30.565 fused_ordering(819) 00:18:30.565 fused_ordering(820) 00:18:31.132 fused_ordering(821) 00:18:31.132 fused_ordering(822) 00:18:31.132 fused_ordering(823) 00:18:31.132 fused_ordering(824) 00:18:31.132 fused_ordering(825) 00:18:31.132 fused_ordering(826) 00:18:31.132 fused_ordering(827) 00:18:31.132 fused_ordering(828) 00:18:31.132 fused_ordering(829) 00:18:31.132 fused_ordering(830) 00:18:31.132 fused_ordering(831) 00:18:31.132 fused_ordering(832) 00:18:31.132 fused_ordering(833) 00:18:31.132 fused_ordering(834) 00:18:31.132 fused_ordering(835) 00:18:31.132 fused_ordering(836) 00:18:31.132 fused_ordering(837) 00:18:31.132 fused_ordering(838) 00:18:31.132 fused_ordering(839) 00:18:31.132 fused_ordering(840) 00:18:31.132 fused_ordering(841) 00:18:31.132 fused_ordering(842) 00:18:31.132 fused_ordering(843) 00:18:31.132 fused_ordering(844) 00:18:31.132 fused_ordering(845) 00:18:31.132 fused_ordering(846) 00:18:31.132 fused_ordering(847) 00:18:31.132 fused_ordering(848) 00:18:31.132 fused_ordering(849) 00:18:31.132 fused_ordering(850) 00:18:31.132 
fused_ordering(851) 00:18:31.132 fused_ordering(852) 00:18:31.132 fused_ordering(853) 00:18:31.132 fused_ordering(854) 00:18:31.132 fused_ordering(855) 00:18:31.132 fused_ordering(856) 00:18:31.132 fused_ordering(857) 00:18:31.132 fused_ordering(858) 00:18:31.132 fused_ordering(859) 00:18:31.132 fused_ordering(860) 00:18:31.132 fused_ordering(861) 00:18:31.132 fused_ordering(862) 00:18:31.132 fused_ordering(863) 00:18:31.132 fused_ordering(864) 00:18:31.132 fused_ordering(865) 00:18:31.132 fused_ordering(866) 00:18:31.132 fused_ordering(867) 00:18:31.132 fused_ordering(868) 00:18:31.132 fused_ordering(869) 00:18:31.132 fused_ordering(870) 00:18:31.132 fused_ordering(871) 00:18:31.132 fused_ordering(872) 00:18:31.132 fused_ordering(873) 00:18:31.132 fused_ordering(874) 00:18:31.132 fused_ordering(875) 00:18:31.132 fused_ordering(876) 00:18:31.132 fused_ordering(877) 00:18:31.132 fused_ordering(878) 00:18:31.132 fused_ordering(879) 00:18:31.132 fused_ordering(880) 00:18:31.132 fused_ordering(881) 00:18:31.132 fused_ordering(882) 00:18:31.132 fused_ordering(883) 00:18:31.132 fused_ordering(884) 00:18:31.132 fused_ordering(885) 00:18:31.132 fused_ordering(886) 00:18:31.132 fused_ordering(887) 00:18:31.132 fused_ordering(888) 00:18:31.132 fused_ordering(889) 00:18:31.132 fused_ordering(890) 00:18:31.132 fused_ordering(891) 00:18:31.132 fused_ordering(892) 00:18:31.132 fused_ordering(893) 00:18:31.132 fused_ordering(894) 00:18:31.132 fused_ordering(895) 00:18:31.132 fused_ordering(896) 00:18:31.132 fused_ordering(897) 00:18:31.132 fused_ordering(898) 00:18:31.132 fused_ordering(899) 00:18:31.132 fused_ordering(900) 00:18:31.132 fused_ordering(901) 00:18:31.132 fused_ordering(902) 00:18:31.132 fused_ordering(903) 00:18:31.132 fused_ordering(904) 00:18:31.132 fused_ordering(905) 00:18:31.132 fused_ordering(906) 00:18:31.132 fused_ordering(907) 00:18:31.132 fused_ordering(908) 00:18:31.132 fused_ordering(909) 00:18:31.132 fused_ordering(910) 00:18:31.132 fused_ordering(911) 00:18:31.132 fused_ordering(912) 00:18:31.132 fused_ordering(913) 00:18:31.132 fused_ordering(914) 00:18:31.132 fused_ordering(915) 00:18:31.132 fused_ordering(916) 00:18:31.132 fused_ordering(917) 00:18:31.132 fused_ordering(918) 00:18:31.132 fused_ordering(919) 00:18:31.132 fused_ordering(920) 00:18:31.132 fused_ordering(921) 00:18:31.132 fused_ordering(922) 00:18:31.132 fused_ordering(923) 00:18:31.132 fused_ordering(924) 00:18:31.132 fused_ordering(925) 00:18:31.132 fused_ordering(926) 00:18:31.132 fused_ordering(927) 00:18:31.132 fused_ordering(928) 00:18:31.132 fused_ordering(929) 00:18:31.132 fused_ordering(930) 00:18:31.132 fused_ordering(931) 00:18:31.132 fused_ordering(932) 00:18:31.132 fused_ordering(933) 00:18:31.132 fused_ordering(934) 00:18:31.132 fused_ordering(935) 00:18:31.132 fused_ordering(936) 00:18:31.132 fused_ordering(937) 00:18:31.132 fused_ordering(938) 00:18:31.132 fused_ordering(939) 00:18:31.132 fused_ordering(940) 00:18:31.132 fused_ordering(941) 00:18:31.132 fused_ordering(942) 00:18:31.132 fused_ordering(943) 00:18:31.132 fused_ordering(944) 00:18:31.132 fused_ordering(945) 00:18:31.132 fused_ordering(946) 00:18:31.132 fused_ordering(947) 00:18:31.132 fused_ordering(948) 00:18:31.132 fused_ordering(949) 00:18:31.132 fused_ordering(950) 00:18:31.132 fused_ordering(951) 00:18:31.132 fused_ordering(952) 00:18:31.132 fused_ordering(953) 00:18:31.132 fused_ordering(954) 00:18:31.132 fused_ordering(955) 00:18:31.132 fused_ordering(956) 00:18:31.132 fused_ordering(957) 00:18:31.132 fused_ordering(958) 
00:18:31.132 fused_ordering(959) 00:18:31.132 fused_ordering(960) 00:18:31.132 fused_ordering(961) 00:18:31.132 fused_ordering(962) 00:18:31.132 fused_ordering(963) 00:18:31.132 fused_ordering(964) 00:18:31.132 fused_ordering(965) 00:18:31.132 fused_ordering(966) 00:18:31.132 fused_ordering(967) 00:18:31.132 fused_ordering(968) 00:18:31.132 fused_ordering(969) 00:18:31.132 fused_ordering(970) 00:18:31.132 fused_ordering(971) 00:18:31.132 fused_ordering(972) 00:18:31.132 fused_ordering(973) 00:18:31.132 fused_ordering(974) 00:18:31.132 fused_ordering(975) 00:18:31.132 fused_ordering(976) 00:18:31.132 fused_ordering(977) 00:18:31.132 fused_ordering(978) 00:18:31.132 fused_ordering(979) 00:18:31.132 fused_ordering(980) 00:18:31.132 fused_ordering(981) 00:18:31.132 fused_ordering(982) 00:18:31.132 fused_ordering(983) 00:18:31.132 fused_ordering(984) 00:18:31.132 fused_ordering(985) 00:18:31.132 fused_ordering(986) 00:18:31.132 fused_ordering(987) 00:18:31.132 fused_ordering(988) 00:18:31.132 fused_ordering(989) 00:18:31.132 fused_ordering(990) 00:18:31.132 fused_ordering(991) 00:18:31.132 fused_ordering(992) 00:18:31.132 fused_ordering(993) 00:18:31.132 fused_ordering(994) 00:18:31.132 fused_ordering(995) 00:18:31.132 fused_ordering(996) 00:18:31.132 fused_ordering(997) 00:18:31.132 fused_ordering(998) 00:18:31.132 fused_ordering(999) 00:18:31.132 fused_ordering(1000) 00:18:31.132 fused_ordering(1001) 00:18:31.132 fused_ordering(1002) 00:18:31.132 fused_ordering(1003) 00:18:31.132 fused_ordering(1004) 00:18:31.132 fused_ordering(1005) 00:18:31.132 fused_ordering(1006) 00:18:31.132 fused_ordering(1007) 00:18:31.132 fused_ordering(1008) 00:18:31.132 fused_ordering(1009) 00:18:31.132 fused_ordering(1010) 00:18:31.132 fused_ordering(1011) 00:18:31.132 fused_ordering(1012) 00:18:31.132 fused_ordering(1013) 00:18:31.132 fused_ordering(1014) 00:18:31.132 fused_ordering(1015) 00:18:31.132 fused_ordering(1016) 00:18:31.132 fused_ordering(1017) 00:18:31.132 fused_ordering(1018) 00:18:31.133 fused_ordering(1019) 00:18:31.133 fused_ordering(1020) 00:18:31.133 fused_ordering(1021) 00:18:31.133 fused_ordering(1022) 00:18:31.133 fused_ordering(1023) 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:31.133 rmmod nvme_tcp 00:18:31.133 rmmod nvme_fabrics 00:18:31.133 rmmod nvme_keyring 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:31.133 03:29:32 
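[editor's note] The initiator's 1024 iterations (fused_ordering(0) through fused_ordering(1023)) all complete, after which nvmftestfini unloads the kernel initiator modules; the target-process kill, iptables cleanup, and namespace removal follow in the trace below. An approximate outline of that teardown, reconstructed from the visible commands, with the namespace deletion an assumption since _remove_spdk_ns itself is not expanded in the trace:

  sync
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null || true   # nvmfpid was 2655309 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the rule tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1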
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2655309 ']' 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2655309 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2655309 ']' 00:18:31.133 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2655309 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655309 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655309' 00:18:31.390 killing process with pid 2655309 00:18:31.390 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2655309 00:18:31.391 03:29:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2655309 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.324 03:29:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.856 00:18:34.856 real 0m12.151s 00:18:34.856 user 0m7.208s 00:18:34.856 sys 0m5.681s 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.856 ************************************ 00:18:34.856 END TEST nvmf_fused_ordering 00:18:34.856 
************************************ 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.856 ************************************ 00:18:34.856 START TEST nvmf_ns_masking 00:18:34.856 ************************************ 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:34.856 * Looking for test storage... 00:18:34.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:34.856 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.857 --rc genhtml_branch_coverage=1 00:18:34.857 --rc genhtml_function_coverage=1 00:18:34.857 --rc genhtml_legend=1 00:18:34.857 --rc geninfo_all_blocks=1 00:18:34.857 --rc geninfo_unexecuted_blocks=1 00:18:34.857 00:18:34.857 ' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.857 --rc genhtml_branch_coverage=1 00:18:34.857 --rc genhtml_function_coverage=1 00:18:34.857 --rc genhtml_legend=1 00:18:34.857 --rc geninfo_all_blocks=1 00:18:34.857 --rc geninfo_unexecuted_blocks=1 00:18:34.857 00:18:34.857 ' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.857 --rc genhtml_branch_coverage=1 00:18:34.857 --rc genhtml_function_coverage=1 00:18:34.857 --rc genhtml_legend=1 00:18:34.857 --rc geninfo_all_blocks=1 00:18:34.857 --rc geninfo_unexecuted_blocks=1 00:18:34.857 00:18:34.857 ' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.857 --rc genhtml_branch_coverage=1 00:18:34.857 --rc genhtml_function_coverage=1 00:18:34.857 --rc genhtml_legend=1 00:18:34.857 --rc geninfo_all_blocks=1 00:18:34.857 --rc geninfo_unexecuted_blocks=1 00:18:34.857 00:18:34.857 ' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e9e5bb18-9cb5-4e55-abea-2d747ed077bd 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2b83d785-34f0-46fe-a640-082d1f345537 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5b10dc43-4194-40cd-a0b6-b74671ba52cb 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:34.857 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:34.858 03:29:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:40.125 03:29:41 
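For readability, the values that target/ns_masking.sh has just defined in the trace above can be collected in one place; the two namespace UUIDs come from uuidgen, and the concrete values below are the ones produced in this particular run (they change on every invocation):

    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1          # subsystem exercised by the test
    HOSTNQN1=nqn.2016-06.io.spdk:host1            # host whose access is granted/revoked
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=5b10dc43-4194-40cd-a0b6-b74671ba52cb   # passed to 'nvme connect -I'
    ns1uuid=e9e5bb18-9cb5-4e55-abea-2d747ed077bd  # $(uuidgen) in this run
    ns2uuid=2b83d785-34f0-46fe-a640-082d1f345537  # $(uuidgen) in this run
    loops=5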
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:40.125 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:40.126 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:40.126 03:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:40.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:40.126 Found net devices under 0000:af:00.0: cvl_0_0 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
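The device scan above keys off PCI vendor/device IDs (0x8086:0x159b is an Intel E810 port, hence the e810 bucket) and then resolves each PCI function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are discovered. A minimal sketch of that resolution step, assuming the same PCI addresses seen in this run:

    for pci in 0000:af:00.0 0000:af:00.1; do
        # each entry under .../net/ is the netdev bound to that function (cvl_0_0, cvl_0_1 here)
        ls "/sys/bus/pci/devices/$pci/net/"
    done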
00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:40.126 Found net devices under 0000:af:00.1: cvl_0_1 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:40.126 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.385 03:29:41 
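The nvmf_tcp_init steps traced here build a small point-to-point test network: the target-side port cvl_0_0 is moved into its own namespace and both ports get addresses on 10.0.0.0/24 (10.0.0.1 on the initiator side, 10.0.0.2 on the target side). Condensed into a standalone sketch with the same device names and addresses as in this run:

    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables ACCEPT rule for TCP port 4420 and the two ping checks that follow in the trace verify this wiring before any NVMe-oF traffic is attempted.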
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:40.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:18:40.385 00:18:40.385 --- 10.0.0.2 ping statistics --- 00:18:40.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.385 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:40.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:40.385 00:18:40.385 --- 10.0.0.1 ping statistics --- 00:18:40.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.385 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2659462 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2659462 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2659462 ']' 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.385 03:29:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 [2024-12-13 03:29:41.496825] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:40.385 [2024-12-13 03:29:41.496934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.643 [2024-12-13 03:29:41.616731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.644 [2024-12-13 03:29:41.720407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.644 [2024-12-13 03:29:41.720453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.644 [2024-12-13 03:29:41.720463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.644 [2024-12-13 03:29:41.720474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.644 [2024-12-13 03:29:41.720484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
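At this point the target application has been launched inside the namespace with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF (the nvmfappstart step above), and the harness waits for its RPC socket before configuring it. A rough stand-in for that launch-and-wait pattern, with paths shortened and the polling loop simplified relative to the real waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll until the target answers on the default RPC socket (simplified waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done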
00:18:40.644 [2024-12-13 03:29:41.721937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.210 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.468 [2024-12-13 03:29:42.506302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.468 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:41.468 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:41.468 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:41.726 Malloc1 00:18:41.726 03:29:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:41.984 Malloc2 00:18:41.984 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.242 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:42.242 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.500 [2024-12-13 03:29:43.563366] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.500 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:42.500 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5b10dc43-4194-40cd-a0b6-b74671ba52cb -a 10.0.0.2 -s 4420 -i 4 00:18:42.759 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:42.759 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:42.759 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.759 03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:42.759 
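The configuration sequence just traced sets up the TCP transport, two malloc bdevs, the subsystem with its first namespace and listener, and then connects from the initiator side. Collapsed into the underlying commands (rpc.py path shortened; the -I value is the HOSTID generated earlier in this run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MB bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 5b10dc43-4194-40cd-a0b6-b74671ba52cb -a 10.0.0.2 -s 4420 -i 4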
03:29:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:44.660 [ 0]:0x1 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc1c87bf720f49c499d2404df2fe59f0 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc1c87bf720f49c499d2404df2fe59f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:44.660 03:29:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:44.919 [ 0]:0x1 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc1c87bf720f49c499d2404df2fe59f0 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc1c87bf720f49c499d2404df2fe59f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:44.919 03:29:46 
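The ns_is_visible checks above always follow the same pattern: list the active namespace IDs on the connected controller, then read the namespace's NGUID and require it to be non-zero (a masked namespace either disappears from list-ns or identifies with an all-zero NGUID). A minimal standalone version of that check, assuming the controller enumerated as /dev/nvme0 as it did in this run:

    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1   # succeeds here because namespace 1 was added as auto-visible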
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:44.919 [ 1]:0x2 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:44.919 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.178 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:45.436 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:45.695 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:45.695 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5b10dc43-4194-40cd-a0b6-b74671ba52cb -a 10.0.0.2 -s 4420 -i 4 00:18:45.954 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:45.954 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:45.954 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.954 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:45.954 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:45.954 03:29:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:47.857 03:29:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.857 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:48.116 [ 0]:0x2 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.116 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.375 [ 0]:0x1 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc1c87bf720f49c499d2404df2fe59f0 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc1c87bf720f49c499d2404df2fe59f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:48.375 [ 1]:0x2 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.375 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.634 03:29:49 
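Namespace 1 was re-added a little earlier with --no-auto-visible, so it stays hidden from every host until it is explicitly attached; the nvmf_ns_add_host / nvmf_ns_remove_host calls traced here toggle that per-host visibility. In shortened form, the round trip being exercised is:

    # grant host1 access to namespace 1 of cnode1 -> ns_is_visible 0x1 starts succeeding
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # revoke it again -> host1 no longer sees the namespace (its NGUID reads back as all zeros)
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1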
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:48.634 [ 0]:0x2 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:48.634 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:48.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.892 03:29:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:48.892 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:48.892 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5b10dc43-4194-40cd-a0b6-b74671ba52cb -a 10.0.0.2 -s 4420 -i 4 00:18:49.150 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:49.150 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:49.150 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.150 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:49.150 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:49.150 03:29:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.049 [ 0]:0x1 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=cc1c87bf720f49c499d2404df2fe59f0 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ cc1c87bf720f49c499d2404df2fe59f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.049 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.307 [ 1]:0x2 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.307 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.566 [ 0]:0x2 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.566 03:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:51.566 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:51.824 [2024-12-13 03:29:52.783788] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:51.824 request: 00:18:51.824 { 00:18:51.824 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.824 "nsid": 2, 00:18:51.824 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.824 "method": "nvmf_ns_remove_host", 00:18:51.824 "req_id": 1 00:18:51.824 } 00:18:51.824 Got JSON-RPC error response 00:18:51.824 response: 00:18:51.824 { 00:18:51.824 "code": -32602, 00:18:51.824 "message": "Invalid parameters" 00:18:51.824 } 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:51.824 03:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.824 [ 0]:0x2 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=895cce70262f4d6da8b42dc3887fd6a3 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 895cce70262f4d6da8b42dc3887fd6a3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:51.824 03:29:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.082 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2661419 00:18:52.082 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2661419 /var/tmp/host.sock 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2661419 ']' 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:52.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.083 03:29:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:52.083 [2024-12-13 03:29:53.186009] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:52.083 [2024-12-13 03:29:53.186113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2661419 ] 00:18:52.340 [2024-12-13 03:29:53.298015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.340 [2024-12-13 03:29:53.404680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.274 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.274 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:53.275 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:53.275 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:53.532 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e9e5bb18-9cb5-4e55-abea-2d747ed077bd 00:18:53.532 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:53.532 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E9E5BB189CB54E55ABEA2D747ED077BD -i 00:18:53.790 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2b83d785-34f0-46fe-a640-082d1f345537 00:18:53.790 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:53.790 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2B83D78534F046FEA640082D1F345537 -i 00:18:53.790 03:29:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:54.048 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:54.305 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:54.305 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:54.562 nvme0n1 00:18:54.562 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:54.562 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:54.820 nvme1n2 00:18:54.820 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:54.820 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:54.820 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:54.820 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:54.820 03:29:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e9e5bb18-9cb5-4e55-abea-2d747ed077bd == \e\9\e\5\b\b\1\8\-\9\c\b\5\-\4\e\5\5\-\a\b\e\a\-\2\d\7\4\7\e\d\0\7\7\b\d ]] 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:55.078 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:55.336 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
2b83d785-34f0-46fe-a640-082d1f345537 == \2\b\8\3\d\7\8\5\-\3\4\f\0\-\4\6\f\e\-\a\6\4\0\-\0\8\2\d\1\f\3\4\5\5\3\7 ]] 00:18:55.336 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:55.593 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e9e5bb18-9cb5-4e55-abea-2d747ed077bd 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E9E5BB189CB54E55ABEA2D747ED077BD 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E9E5BB189CB54E55ABEA2D747ED077BD 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:55.851 03:29:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E9E5BB189CB54E55ABEA2D747ED077BD 00:18:55.851 [2024-12-13 03:29:57.009053] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:55.851 [2024-12-13 03:29:57.009095] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:55.851 [2024-12-13 03:29:57.009115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.851 request: 00:18:55.851 { 00:18:55.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.851 "namespace": { 00:18:55.851 "bdev_name": 
"invalid", 00:18:55.851 "nsid": 1, 00:18:55.851 "nguid": "E9E5BB189CB54E55ABEA2D747ED077BD", 00:18:55.851 "no_auto_visible": false, 00:18:55.851 "hide_metadata": false 00:18:55.851 }, 00:18:55.851 "method": "nvmf_subsystem_add_ns", 00:18:55.851 "req_id": 1 00:18:55.851 } 00:18:55.851 Got JSON-RPC error response 00:18:55.851 response: 00:18:55.851 { 00:18:55.851 "code": -32602, 00:18:55.851 "message": "Invalid parameters" 00:18:55.851 } 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e9e5bb18-9cb5-4e55-abea-2d747ed077bd 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:55.851 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E9E5BB189CB54E55ABEA2D747ED077BD -i 00:18:56.108 03:29:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2661419 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2661419 ']' 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2661419 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2661419 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2661419' 00:18:58.636 killing process with pid 2661419 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2661419 00:18:58.636 03:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2661419 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.164 03:30:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.164 rmmod nvme_tcp 00:19:01.164 rmmod nvme_fabrics 00:19:01.164 rmmod nvme_keyring 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2659462 ']' 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2659462 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2659462 ']' 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2659462 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2659462 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2659462' 00:19:01.164 killing process with pid 2659462 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2659462 00:19:01.164 03:30:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2659462 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.539 03:30:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.438 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:04.438 00:19:04.438 real 0m30.023s 00:19:04.438 user 0m37.562s 00:19:04.438 sys 0m6.918s 00:19:04.439 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.439 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:04.439 ************************************ 00:19:04.439 END TEST nvmf_ns_masking 00:19:04.439 ************************************ 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:04.696 ************************************ 00:19:04.696 START TEST nvmf_nvme_cli 00:19:04.696 ************************************ 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:04.696 * Looking for test storage... 
00:19:04.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.696 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.697 --rc genhtml_branch_coverage=1 00:19:04.697 --rc genhtml_function_coverage=1 00:19:04.697 --rc genhtml_legend=1 00:19:04.697 --rc geninfo_all_blocks=1 00:19:04.697 --rc geninfo_unexecuted_blocks=1 00:19:04.697 00:19:04.697 ' 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.697 --rc genhtml_branch_coverage=1 00:19:04.697 --rc genhtml_function_coverage=1 00:19:04.697 --rc genhtml_legend=1 00:19:04.697 --rc geninfo_all_blocks=1 00:19:04.697 --rc geninfo_unexecuted_blocks=1 00:19:04.697 00:19:04.697 ' 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.697 --rc genhtml_branch_coverage=1 00:19:04.697 --rc genhtml_function_coverage=1 00:19:04.697 --rc genhtml_legend=1 00:19:04.697 --rc geninfo_all_blocks=1 00:19:04.697 --rc geninfo_unexecuted_blocks=1 00:19:04.697 00:19:04.697 ' 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.697 --rc genhtml_branch_coverage=1 00:19:04.697 --rc genhtml_function_coverage=1 00:19:04.697 --rc genhtml_legend=1 00:19:04.697 --rc geninfo_all_blocks=1 00:19:04.697 --rc geninfo_unexecuted_blocks=1 00:19:04.697 00:19:04.697 ' 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.697 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:04.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.955 03:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:04.955 03:30:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:10.337 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:10.337 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.337 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.338 
03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:10.338 Found net devices under 0000:af:00.0: cvl_0_0 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:10.338 Found net devices under 0000:af:00.1: cvl_0_1 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.338 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.597 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:10.597 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.597 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.597 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.597 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:10.597 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:10.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:19:10.598 00:19:10.598 --- 10.0.0.2 ping statistics --- 00:19:10.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.598 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:19:10.598 00:19:10.598 --- 10.0.0.1 ping statistics --- 00:19:10.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.598 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2667216 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2667216 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2667216 ']' 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.598 03:30:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.598 [2024-12-13 03:30:11.780310] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
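With both pings answered, nvmf_tcp_init has finished wiring the two-interface test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables ACCEPT rule tagged SPDK_NVMF opens TCP port 4420, and nvmf_tgt is then launched inside the namespace. A condensed replay of the commands visible in the trace (interface and namespace names as used in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                             # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace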
00:19:10.598 [2024-12-13 03:30:11.780404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.857 [2024-12-13 03:30:11.898814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.857 [2024-12-13 03:30:12.008186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.857 [2024-12-13 03:30:12.008226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.857 [2024-12-13 03:30:12.008236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.857 [2024-12-13 03:30:12.008262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.857 [2024-12-13 03:30:12.008270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.857 [2024-12-13 03:30:12.010626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.857 [2024-12-13 03:30:12.010700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.857 [2024-12-13 03:30:12.010793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.857 [2024-12-13 03:30:12.010802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.425 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 [2024-12-13 03:30:12.634803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 Malloc0 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
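Once the target's reactors are up on all four cores, the test provisions storage over the RPC socket: a TCP transport and two RAM-backed malloc bdevs. The rpc_cmd wrapper in the trace drives the same JSON-RPC methods as scripts/rpc.py, so a rough stand-alone equivalent (flags copied from the trace; rpc.py path and default socket assumed) is:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport with the options the test passes
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1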
00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 Malloc1 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 [2024-12-13 03:30:12.837599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.684 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:19:11.943 00:19:11.943 Discovery Log Number of Records 2, Generation counter 2 00:19:11.943 =====Discovery Log Entry 0====== 00:19:11.943 trtype: tcp 00:19:11.943 adrfam: ipv4 00:19:11.943 subtype: current discovery subsystem 00:19:11.943 treq: not required 00:19:11.943 portid: 0 00:19:11.943 trsvcid: 4420 00:19:11.943 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:11.943 traddr: 10.0.0.2 00:19:11.943 eflags: explicit discovery connections, duplicate discovery information 00:19:11.943 sectype: none 00:19:11.943 =====Discovery Log Entry 1====== 00:19:11.943 trtype: tcp 00:19:11.943 adrfam: ipv4 00:19:11.943 subtype: nvme subsystem 00:19:11.943 treq: not required 00:19:11.943 portid: 0 00:19:11.943 trsvcid: 4420 00:19:11.943 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:11.943 traddr: 10.0.0.2 00:19:11.943 eflags: none 00:19:11.943 sectype: none 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:11.943 03:30:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:13.321 03:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:13.321 03:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:13.321 03:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.321 03:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:13.321 03:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:13.321 03:30:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:15.225 03:30:16 
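The discovery log returned by the target lists both the discovery subsystem and nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, which is exactly what the preceding RPCs set up, and the host then connects with nvme-cli. A condensed replay of the provisioning and host-side commands seen in the trace (host NQN/ID are the values this run generated):

    # target side, via the RPC socket
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host side, root namespace
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    nvme discover --hostnqn=$HOSTNQN --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
    nvme connect  --hostnqn=$HOSTNQN --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420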
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:15.225 /dev/nvme0n2 ]] 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.225 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:15.484 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:15.744 03:30:16 
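waitforserial polls lsblk until block devices carrying the subsystem's serial number appear; with two namespaces exported, two devices (/dev/nvme0n1 and /dev/nvme0n2) show up under a single controller, after which the test disconnects. The check it performs boils down to (serial taken from the subsystem created above):

    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expected to reach 2 in this run
    nvme list                                                # lists /dev/nvme0n1 and /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # detaches both namespaces at once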
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.744 03:30:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.744 rmmod nvme_tcp 00:19:16.003 rmmod nvme_fabrics 00:19:16.003 rmmod nvme_keyring 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2667216 ']' 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2667216 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2667216 ']' 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2667216 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2667216 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667216' 00:19:16.003 killing process with pid 2667216 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2667216 00:19:16.003 03:30:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2667216 00:19:17.381 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.381 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:17.381 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:17.381 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.640 03:30:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:19.547 00:19:19.547 real 0m14.944s 00:19:19.547 user 0m26.935s 00:19:19.547 sys 0m5.148s 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:19.547 ************************************ 00:19:19.547 END TEST nvmf_nvme_cli 00:19:19.547 ************************************ 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:19.547 ************************************ 00:19:19.547 START TEST nvmf_auth_target 00:19:19.547 ************************************ 00:19:19.547 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
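nvmftestfini tears the environment back down in roughly the reverse order it was built: the kernel initiator modules are unloaded, the nvmf_tgt process is killed, the SPDK_NVMF-tagged iptables rule is dropped by filtering it out of iptables-save before restoring, and the interface addressing is flushed (_remove_spdk_ns runs with its output redirected, so the namespace deletion itself is not visible in the trace). A condensed replay of the visible steps, with the PID from this run:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                                  # trace shows nvme_tcp, nvme_fabrics, nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    kill 2667216                                             # nvmfpid for this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove only the rules the test tagged
    ip -4 addr flush cvl_0_1

With that, nvmf_nvme_cli finishes (about 14.9 s wall time) and the harness moves on to run_test nvmf_auth_target.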
00:19:19.807 * Looking for test storage... 00:19:19.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:19.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.807 --rc genhtml_branch_coverage=1 00:19:19.807 --rc genhtml_function_coverage=1 00:19:19.807 --rc genhtml_legend=1 00:19:19.807 --rc geninfo_all_blocks=1 00:19:19.807 --rc geninfo_unexecuted_blocks=1 00:19:19.807 00:19:19.807 ' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:19.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.807 --rc genhtml_branch_coverage=1 00:19:19.807 --rc genhtml_function_coverage=1 00:19:19.807 --rc genhtml_legend=1 00:19:19.807 --rc geninfo_all_blocks=1 00:19:19.807 --rc geninfo_unexecuted_blocks=1 00:19:19.807 00:19:19.807 ' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:19.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.807 --rc genhtml_branch_coverage=1 00:19:19.807 --rc genhtml_function_coverage=1 00:19:19.807 --rc genhtml_legend=1 00:19:19.807 --rc geninfo_all_blocks=1 00:19:19.807 --rc geninfo_unexecuted_blocks=1 00:19:19.807 00:19:19.807 ' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:19.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.807 --rc genhtml_branch_coverage=1 00:19:19.807 --rc genhtml_function_coverage=1 00:19:19.807 --rc genhtml_legend=1 00:19:19.807 --rc geninfo_all_blocks=1 00:19:19.807 --rc geninfo_unexecuted_blocks=1 00:19:19.807 00:19:19.807 ' 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.807 03:30:20 
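Before the auth test proper starts, autotest_common.sh checks the installed lcov version (the lt 1.15 2 / cmp_versions trace above) so it can pick coverage flags that older lcov releases understand. The comparison splits both version strings on '.', '-' and ':' and compares them component by component; a minimal sketch of that idea (hypothetical helper, not the actual scripts/common.sh code):

    cmp_ver_lt() {                       # succeeds if $1 sorts before $2, component-wise
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal is not "less than"
    }
    cmp_ver_lt 1.15 2 && echo "lcov predates 2.x"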
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.807 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:19.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
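One artifact worth noting in the trace above: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because the variable it tests is empty in this environment, and bash's test builtin reports "integer expression expected" (nvmf/common.sh line 33). The run continues, so the message is cosmetic here; the usual defensive pattern is to default the value before the numeric test. A tiny hypothetical illustration (SOME_FLAG is a stand-in name, not a variable from common.sh):

    flag=${SOME_FLAG:-0}            # empty/unset collapses to 0 before the numeric comparison
    if [ "$flag" -eq 1 ]; then
        echo "feature enabled"
    fi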
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.808 03:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:25.083 
03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:25.083 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.083 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.084 03:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:25.084 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:25.084 Found net devices under 0000:af:00.0: cvl_0_0 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:25.084 Found net devices under 0000:af:00.1: cvl_0_1 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.084 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:25.343 03:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:25.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:19:25.343 00:19:25.343 --- 10.0.0.2 ping statistics --- 00:19:25.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.343 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:19:25.343 00:19:25.343 --- 10.0.0.1 ping statistics --- 00:19:25.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.343 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2671633 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2671633 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2671633 ']' 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.343 03:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.280 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.280 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:26.280 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2671867 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=539da41be3dead16bb7da8019ed5aef1b2026a7e54258560 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yOt 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 539da41be3dead16bb7da8019ed5aef1b2026a7e54258560 0 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 539da41be3dead16bb7da8019ed5aef1b2026a7e54258560 0 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=539da41be3dead16bb7da8019ed5aef1b2026a7e54258560 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:26.281 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
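gen_dhchap_key above draws len/2 random bytes with xxd -p and keeps the resulting hex text itself as the secret, then format_dhchap_key's inline python wraps it into the DHHC-1:<hash-id>:<base64>: string that reappears later on the nvme connect command lines. A minimal sketch of that wrapping, assuming the conventional secret layout (base64 of the secret bytes followed by their CRC-32, taken as little-endian here) and using a hypothetical helper name rather than SPDK's own function in nvmf/common.sh:

# Sketch of the DHHC-1 secret layout used by this test, not SPDK's exact helper.
# Assumption: base64(secret_bytes || CRC-32(secret_bytes), little-endian),
# prefixed with "DHHC-1:<2-digit hash id>:" and terminated with ":".
import base64
import secrets
import zlib

HASH_IDS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def gen_dhchap_secret(digest: str = "null", hex_len: int = 48) -> str:
    # Like `xxd -p -c0 -l <hex_len/2> /dev/urandom`: the ASCII hex text
    # itself (not the decoded bytes) is the secret material.
    secret = secrets.token_hex(hex_len // 2).encode("ascii")
    crc = zlib.crc32(secret).to_bytes(4, "little")   # endianness assumed
    blob = base64.b64encode(secret + crc).decode("ascii")
    return f"DHHC-1:{HASH_IDS[digest]:02d}:{blob}:"

if __name__ == "__main__":
    # e.g. "DHHC-1:00:<64 base64 chars of hex text><8 chars of CRC>:"
    print(gen_dhchap_secret("null", 48))
    print(gen_dhchap_secret("sha512", 64))

The first key generated above (539da41b...8560, digest null) surfaces further down in exactly this shape, as the DHHC-1:00:NTM5ZGE0...: secret passed to the first nvme connect.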
00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yOt 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yOt 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.yOt 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=520f433a71e2fe444beae16cbfcde30985db03ad535884d8ca3b399a5fe71fb4 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zdr 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 520f433a71e2fe444beae16cbfcde30985db03ad535884d8ca3b399a5fe71fb4 3 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 520f433a71e2fe444beae16cbfcde30985db03ad535884d8ca3b399a5fe71fb4 3 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=520f433a71e2fe444beae16cbfcde30985db03ad535884d8ca3b399a5fe71fb4 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zdr 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zdr 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.zdr 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:19:26.540 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b8d2726aa9e9e2353d4c286ac1150210 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BAG 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b8d2726aa9e9e2353d4c286ac1150210 1 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b8d2726aa9e9e2353d4c286ac1150210 1 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b8d2726aa9e9e2353d4c286ac1150210 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BAG 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BAG 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BAG 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fbe1cf20394fa30465f07e3fb0a93e4d30420bd76fb56030 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gbr 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fbe1cf20394fa30465f07e3fb0a93e4d30420bd76fb56030 2 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fbe1cf20394fa30465f07e3fb0a93e4d30420bd76fb56030 2 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.541 03:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fbe1cf20394fa30465f07e3fb0a93e4d30420bd76fb56030 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gbr 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gbr 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.gbr 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=25a3adbc3cf4387e3bc5b9c1910da87861257d1c66fbe55f 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lhP 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 25a3adbc3cf4387e3bc5b9c1910da87861257d1c66fbe55f 2 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 25a3adbc3cf4387e3bc5b9c1910da87861257d1c66fbe55f 2 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=25a3adbc3cf4387e3bc5b9c1910da87861257d1c66fbe55f 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lhP 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lhP 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.lhP 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.541 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3ffd537bc210445e85d29e1a559873f1 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0Wk 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3ffd537bc210445e85d29e1a559873f1 1 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3ffd537bc210445e85d29e1a559873f1 1 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.800 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3ffd537bc210445e85d29e1a559873f1 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0Wk 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0Wk 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.0Wk 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3fa6ad9d931c43f7ae30388e1cd2c40a6ff70fceb7f4399f0918d0c521e749a4 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I40 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 3fa6ad9d931c43f7ae30388e1cd2c40a6ff70fceb7f4399f0918d0c521e749a4 3 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3fa6ad9d931c43f7ae30388e1cd2c40a6ff70fceb7f4399f0918d0c521e749a4 3 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3fa6ad9d931c43f7ae30388e1cd2c40a6ff70fceb7f4399f0918d0c521e749a4 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I40 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I40 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.I40 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2671633 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2671633 ']' 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.801 03:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2671867 /var/tmp/host.sock 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2671867 ']' 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:27.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
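From here the script registers each generated key file with both daemons: rpc_cmd talks to the target over /var/tmp/spdk.sock while hostrpc drives the host-side spdk_tgt over /var/tmp/host.sock, in both cases via keyring_file_add_key. A minimal sketch of what one such call might look like on the wire, assuming SPDK's usual JSON-RPC 2.0 framing over a UNIX stream socket; the parameter names ("name", "path") are assumptions here, and scripts/rpc.py remains the authoritative client:

# Sketch: issue one keyring_file_add_key call over an SPDK RPC socket.
# Assumes JSON-RPC 2.0 over a UNIX stream socket; the "name"/"path"
# parameter names are assumptions - scripts/rpc.py is the reference client.
import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full reply arrived")
            buf += chunk
            try:
                # Return once the buffer parses as one complete JSON object.
                reply, _ = json.JSONDecoder().raw_decode(buf.decode())
                return reply
            except ValueError:
                continue  # partial JSON, keep reading

if __name__ == "__main__":
    print(spdk_rpc("/var/tmp/host.sock", "keyring_file_add_key",
                   {"name": "key0", "path": "/tmp/spdk.key-null.yOt"}))
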
00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.060 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yOt 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yOt 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yOt 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.zdr ]] 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdr 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdr 00:19:27.628 03:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdr 00:19:27.887 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:27.887 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BAG 00:19:27.887 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.887 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.887 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.887 03:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BAG 00:19:27.887 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BAG 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.gbr ]] 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gbr 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gbr 00:19:28.146 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gbr 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lhP 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lhP 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lhP 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.0Wk ]] 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Wk 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.407 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Wk 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Wk 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:28.667 03:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.I40 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.I40 00:19:28.667 03:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.I40 00:19:28.927 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:28.927 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:28.927 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.927 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.927 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.927 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.186 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.186 
03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.445 00:19:29.445 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.445 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.445 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.704 { 00:19:29.704 "cntlid": 1, 00:19:29.704 "qid": 0, 00:19:29.704 "state": "enabled", 00:19:29.704 "thread": "nvmf_tgt_poll_group_000", 00:19:29.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:29.704 "listen_address": { 00:19:29.704 "trtype": "TCP", 00:19:29.704 "adrfam": "IPv4", 00:19:29.704 "traddr": "10.0.0.2", 00:19:29.704 "trsvcid": "4420" 00:19:29.704 }, 00:19:29.704 "peer_address": { 00:19:29.704 "trtype": "TCP", 00:19:29.704 "adrfam": "IPv4", 00:19:29.704 "traddr": "10.0.0.1", 00:19:29.704 "trsvcid": "33500" 00:19:29.704 }, 00:19:29.704 "auth": { 00:19:29.704 "state": "completed", 00:19:29.704 "digest": "sha256", 00:19:29.704 "dhgroup": "null" 00:19:29.704 } 00:19:29.704 } 00:19:29.704 ]' 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.704 03:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.963 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:29.963 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:30.530 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.531 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.789 03:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.789 03:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.048 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.048 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.307 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.307 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.307 { 00:19:31.307 "cntlid": 3, 00:19:31.307 "qid": 0, 00:19:31.307 "state": "enabled", 00:19:31.307 "thread": "nvmf_tgt_poll_group_000", 00:19:31.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:31.307 "listen_address": { 00:19:31.307 "trtype": "TCP", 00:19:31.307 "adrfam": "IPv4", 00:19:31.307 "traddr": "10.0.0.2", 00:19:31.307 "trsvcid": "4420" 00:19:31.307 }, 00:19:31.307 "peer_address": { 00:19:31.307 "trtype": "TCP", 00:19:31.307 "adrfam": "IPv4", 00:19:31.307 "traddr": "10.0.0.1", 00:19:31.307 "trsvcid": "33518" 00:19:31.307 }, 00:19:31.307 "auth": { 00:19:31.307 "state": "completed", 00:19:31.307 "digest": "sha256", 00:19:31.307 "dhgroup": "null" 00:19:31.307 } 00:19:31.307 } 00:19:31.307 ]' 00:19:31.307 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.308 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.567 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:31.567 03:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.135 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.394 03:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.394 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.653 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.653 { 00:19:32.653 "cntlid": 5, 00:19:32.653 "qid": 0, 00:19:32.653 "state": "enabled", 00:19:32.653 "thread": "nvmf_tgt_poll_group_000", 00:19:32.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:32.653 "listen_address": { 00:19:32.653 "trtype": "TCP", 00:19:32.653 "adrfam": "IPv4", 00:19:32.653 "traddr": "10.0.0.2", 00:19:32.653 "trsvcid": "4420" 00:19:32.653 }, 00:19:32.653 "peer_address": { 00:19:32.653 "trtype": "TCP", 00:19:32.653 "adrfam": "IPv4", 00:19:32.653 "traddr": "10.0.0.1", 00:19:32.653 "trsvcid": "33554" 00:19:32.653 }, 00:19:32.653 "auth": { 00:19:32.653 "state": "completed", 00:19:32.653 "digest": "sha256", 00:19:32.653 "dhgroup": "null" 00:19:32.653 } 00:19:32.653 } 00:19:32.653 ]' 00:19:32.653 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.912 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.912 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.912 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:32.912 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.912 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.912 03:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.912 03:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.170 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:33.170 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.737 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.995 03:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.254 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.254 { 00:19:34.254 "cntlid": 7, 00:19:34.254 "qid": 0, 00:19:34.254 "state": "enabled", 00:19:34.254 "thread": "nvmf_tgt_poll_group_000", 00:19:34.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:34.254 "listen_address": { 00:19:34.254 "trtype": "TCP", 00:19:34.254 "adrfam": "IPv4", 00:19:34.254 "traddr": "10.0.0.2", 00:19:34.254 "trsvcid": "4420" 00:19:34.254 }, 00:19:34.254 "peer_address": { 00:19:34.254 "trtype": "TCP", 00:19:34.254 "adrfam": "IPv4", 00:19:34.254 "traddr": "10.0.0.1", 00:19:34.254 "trsvcid": "33600" 00:19:34.254 }, 00:19:34.254 "auth": { 00:19:34.254 "state": "completed", 00:19:34.254 "digest": "sha256", 00:19:34.254 "dhgroup": "null" 00:19:34.254 } 00:19:34.254 } 00:19:34.254 ]' 00:19:34.254 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.513 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.771 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:34.771 03:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.339 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.340 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.340 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.340 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.340 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.340 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.340 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.597 00:19:35.597 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.597 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.597 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.856 { 00:19:35.856 "cntlid": 9, 00:19:35.856 "qid": 0, 00:19:35.856 "state": "enabled", 00:19:35.856 "thread": "nvmf_tgt_poll_group_000", 00:19:35.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:35.856 "listen_address": { 00:19:35.856 "trtype": "TCP", 00:19:35.856 "adrfam": "IPv4", 00:19:35.856 "traddr": "10.0.0.2", 00:19:35.856 "trsvcid": "4420" 00:19:35.856 }, 00:19:35.856 "peer_address": { 00:19:35.856 "trtype": "TCP", 00:19:35.856 "adrfam": "IPv4", 00:19:35.856 "traddr": "10.0.0.1", 00:19:35.856 "trsvcid": "33620" 00:19:35.856 }, 00:19:35.856 "auth": { 00:19:35.856 "state": "completed", 00:19:35.856 "digest": "sha256", 00:19:35.856 "dhgroup": "ffdhe2048" 00:19:35.856 } 00:19:35.856 } 00:19:35.856 ]' 00:19:35.856 03:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.856 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.856 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.856 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:35.856 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.115 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.115 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.115 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.115 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:36.115 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.682 03:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.941 03:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.941 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.200 00:19:37.200 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.200 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.200 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.459 { 00:19:37.459 "cntlid": 11, 00:19:37.459 "qid": 0, 00:19:37.459 "state": "enabled", 00:19:37.459 "thread": "nvmf_tgt_poll_group_000", 00:19:37.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:37.459 "listen_address": { 00:19:37.459 "trtype": "TCP", 00:19:37.459 "adrfam": "IPv4", 00:19:37.459 "traddr": "10.0.0.2", 00:19:37.459 "trsvcid": "4420" 00:19:37.459 }, 00:19:37.459 "peer_address": { 00:19:37.459 "trtype": "TCP", 00:19:37.459 "adrfam": "IPv4", 00:19:37.459 "traddr": "10.0.0.1", 00:19:37.459 "trsvcid": "33658" 00:19:37.459 }, 00:19:37.459 "auth": { 00:19:37.459 "state": "completed", 00:19:37.459 "digest": "sha256", 00:19:37.459 "dhgroup": "ffdhe2048" 00:19:37.459 } 00:19:37.459 } 00:19:37.459 ]' 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.459 03:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.459 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.761 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:37.761 03:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.382 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:38.641 03:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.641 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.901 00:19:38.902 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.902 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.902 03:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.902 { 00:19:38.902 "cntlid": 13, 00:19:38.902 "qid": 0, 00:19:38.902 "state": "enabled", 00:19:38.902 "thread": "nvmf_tgt_poll_group_000", 00:19:38.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:38.902 "listen_address": { 00:19:38.902 "trtype": "TCP", 00:19:38.902 "adrfam": "IPv4", 00:19:38.902 "traddr": "10.0.0.2", 00:19:38.902 "trsvcid": "4420" 00:19:38.902 }, 00:19:38.902 "peer_address": { 00:19:38.902 "trtype": "TCP", 00:19:38.902 "adrfam": "IPv4", 00:19:38.902 "traddr": "10.0.0.1", 00:19:38.902 "trsvcid": "33666" 00:19:38.902 }, 00:19:38.902 "auth": { 00:19:38.902 "state": "completed", 00:19:38.902 "digest": 
"sha256", 00:19:38.902 "dhgroup": "ffdhe2048" 00:19:38.902 } 00:19:38.902 } 00:19:38.902 ]' 00:19:38.902 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.161 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.420 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:39.420 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.988 03:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.988 03:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:39.988 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:39.989 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.247 00:19:40.247 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.247 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.247 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.505 { 00:19:40.505 "cntlid": 15, 00:19:40.505 "qid": 0, 00:19:40.505 "state": "enabled", 00:19:40.505 "thread": "nvmf_tgt_poll_group_000", 00:19:40.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:40.505 "listen_address": { 00:19:40.505 "trtype": "TCP", 00:19:40.505 "adrfam": "IPv4", 00:19:40.505 "traddr": "10.0.0.2", 00:19:40.505 "trsvcid": "4420" 00:19:40.505 }, 00:19:40.505 "peer_address": { 00:19:40.505 "trtype": "TCP", 00:19:40.505 "adrfam": "IPv4", 00:19:40.505 "traddr": "10.0.0.1", 00:19:40.505 
"trsvcid": "45222" 00:19:40.505 }, 00:19:40.505 "auth": { 00:19:40.505 "state": "completed", 00:19:40.505 "digest": "sha256", 00:19:40.505 "dhgroup": "ffdhe2048" 00:19:40.505 } 00:19:40.505 } 00:19:40.505 ]' 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.505 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.764 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.764 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.764 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.764 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:40.764 03:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.332 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:41.591 03:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.591 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.850 00:19:41.850 03:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.850 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.850 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.109 { 00:19:42.109 "cntlid": 17, 00:19:42.109 "qid": 0, 00:19:42.109 "state": "enabled", 00:19:42.109 "thread": "nvmf_tgt_poll_group_000", 00:19:42.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.109 "listen_address": { 00:19:42.109 "trtype": "TCP", 00:19:42.109 "adrfam": "IPv4", 
00:19:42.109 "traddr": "10.0.0.2", 00:19:42.109 "trsvcid": "4420" 00:19:42.109 }, 00:19:42.109 "peer_address": { 00:19:42.109 "trtype": "TCP", 00:19:42.109 "adrfam": "IPv4", 00:19:42.109 "traddr": "10.0.0.1", 00:19:42.109 "trsvcid": "45246" 00:19:42.109 }, 00:19:42.109 "auth": { 00:19:42.109 "state": "completed", 00:19:42.109 "digest": "sha256", 00:19:42.109 "dhgroup": "ffdhe3072" 00:19:42.109 } 00:19:42.109 } 00:19:42.109 ]' 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.109 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.369 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.369 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.369 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.369 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:42.369 03:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.937 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.196 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.455 00:19:43.455 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.455 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.455 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.714 { 
00:19:43.714 "cntlid": 19, 00:19:43.714 "qid": 0, 00:19:43.714 "state": "enabled", 00:19:43.714 "thread": "nvmf_tgt_poll_group_000", 00:19:43.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:43.714 "listen_address": { 00:19:43.714 "trtype": "TCP", 00:19:43.714 "adrfam": "IPv4", 00:19:43.714 "traddr": "10.0.0.2", 00:19:43.714 "trsvcid": "4420" 00:19:43.714 }, 00:19:43.714 "peer_address": { 00:19:43.714 "trtype": "TCP", 00:19:43.714 "adrfam": "IPv4", 00:19:43.714 "traddr": "10.0.0.1", 00:19:43.714 "trsvcid": "45276" 00:19:43.714 }, 00:19:43.714 "auth": { 00:19:43.714 "state": "completed", 00:19:43.714 "digest": "sha256", 00:19:43.714 "dhgroup": "ffdhe3072" 00:19:43.714 } 00:19:43.714 } 00:19:43.714 ]' 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.714 03:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.973 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:43.973 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:44.541 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.542 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.801 03:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.074 00:19:45.074 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.074 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.074 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.333 03:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.333 { 00:19:45.333 "cntlid": 21, 00:19:45.333 "qid": 0, 00:19:45.333 "state": "enabled", 00:19:45.333 "thread": "nvmf_tgt_poll_group_000", 00:19:45.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.333 "listen_address": { 00:19:45.333 "trtype": "TCP", 00:19:45.333 "adrfam": "IPv4", 00:19:45.333 "traddr": "10.0.0.2", 00:19:45.333 "trsvcid": "4420" 00:19:45.333 }, 00:19:45.333 "peer_address": { 00:19:45.333 "trtype": "TCP", 00:19:45.333 "adrfam": "IPv4", 00:19:45.333 "traddr": "10.0.0.1", 00:19:45.333 "trsvcid": "45294" 00:19:45.333 }, 00:19:45.333 "auth": { 00:19:45.333 "state": "completed", 00:19:45.333 "digest": "sha256", 00:19:45.333 "dhgroup": "ffdhe3072" 00:19:45.333 } 00:19:45.333 } 00:19:45.333 ]' 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.333 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.592 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:45.592 03:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.161 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.420 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.679 00:19:46.679 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.679 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.679 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.938 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.938 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.938 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.938 03:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.938 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.938 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.938 { 00:19:46.938 "cntlid": 23, 00:19:46.938 "qid": 0, 00:19:46.938 "state": "enabled", 00:19:46.938 "thread": "nvmf_tgt_poll_group_000", 00:19:46.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:46.938 "listen_address": { 00:19:46.938 "trtype": "TCP", 00:19:46.938 "adrfam": "IPv4", 00:19:46.938 "traddr": "10.0.0.2", 00:19:46.938 "trsvcid": "4420" 00:19:46.938 }, 00:19:46.938 "peer_address": { 00:19:46.938 "trtype": "TCP", 00:19:46.938 "adrfam": "IPv4", 00:19:46.938 "traddr": "10.0.0.1", 00:19:46.938 "trsvcid": "45328" 00:19:46.938 }, 00:19:46.938 "auth": { 00:19:46.938 "state": "completed", 00:19:46.938 "digest": "sha256", 00:19:46.938 "dhgroup": "ffdhe3072" 00:19:46.938 } 00:19:46.938 } 00:19:46.938 ]' 00:19:46.938 03:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.938 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.196 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:47.197 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:47.764 03:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.023 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.282 00:19:48.282 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.282 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.282 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.541 { 00:19:48.541 "cntlid": 25, 00:19:48.541 "qid": 0, 00:19:48.541 "state": "enabled", 00:19:48.541 "thread": "nvmf_tgt_poll_group_000", 00:19:48.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.541 "listen_address": { 00:19:48.541 "trtype": "TCP", 00:19:48.541 "adrfam": "IPv4", 00:19:48.541 "traddr": "10.0.0.2", 00:19:48.541 "trsvcid": "4420" 00:19:48.541 }, 00:19:48.541 "peer_address": { 00:19:48.541 "trtype": "TCP", 00:19:48.541 "adrfam": "IPv4", 00:19:48.541 "traddr": "10.0.0.1", 00:19:48.541 "trsvcid": "45360" 00:19:48.541 }, 00:19:48.541 "auth": { 00:19:48.541 "state": "completed", 00:19:48.541 "digest": "sha256", 00:19:48.541 "dhgroup": "ffdhe4096" 00:19:48.541 } 00:19:48.541 } 00:19:48.541 ]' 00:19:48.541 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.542 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.801 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:48.801 03:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.369 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.628 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.887 00:19:49.887 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.887 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.887 03:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.147 { 00:19:50.147 "cntlid": 27, 00:19:50.147 "qid": 0, 00:19:50.147 "state": "enabled", 00:19:50.147 "thread": "nvmf_tgt_poll_group_000", 00:19:50.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.147 "listen_address": { 00:19:50.147 "trtype": "TCP", 00:19:50.147 "adrfam": "IPv4", 00:19:50.147 "traddr": "10.0.0.2", 00:19:50.147 "trsvcid": "4420" 00:19:50.147 }, 00:19:50.147 "peer_address": { 00:19:50.147 "trtype": "TCP", 00:19:50.147 "adrfam": "IPv4", 00:19:50.147 "traddr": "10.0.0.1", 00:19:50.147 "trsvcid": "46114" 00:19:50.147 }, 00:19:50.147 "auth": { 00:19:50.147 "state": "completed", 00:19:50.147 "digest": "sha256", 00:19:50.147 "dhgroup": "ffdhe4096" 00:19:50.147 } 00:19:50.147 } 00:19:50.147 ]' 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.147 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.406 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:50.406 03:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:50.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.973 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.233 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.492 00:19:51.492 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.492 
03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.492 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.751 { 00:19:51.751 "cntlid": 29, 00:19:51.751 "qid": 0, 00:19:51.751 "state": "enabled", 00:19:51.751 "thread": "nvmf_tgt_poll_group_000", 00:19:51.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.751 "listen_address": { 00:19:51.751 "trtype": "TCP", 00:19:51.751 "adrfam": "IPv4", 00:19:51.751 "traddr": "10.0.0.2", 00:19:51.751 "trsvcid": "4420" 00:19:51.751 }, 00:19:51.751 "peer_address": { 00:19:51.751 "trtype": "TCP", 00:19:51.751 "adrfam": "IPv4", 00:19:51.751 "traddr": "10.0.0.1", 00:19:51.751 "trsvcid": "46150" 00:19:51.751 }, 00:19:51.751 "auth": { 00:19:51.751 "state": "completed", 00:19:51.751 "digest": "sha256", 00:19:51.751 "dhgroup": "ffdhe4096" 00:19:51.751 } 00:19:51.751 } 00:19:51.751 ]' 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.751 03:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.010 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:52.010 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: 
--dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.578 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.838 03:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.097 00:19:53.097 03:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.097 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.097 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.356 { 00:19:53.356 "cntlid": 31, 00:19:53.356 "qid": 0, 00:19:53.356 "state": "enabled", 00:19:53.356 "thread": "nvmf_tgt_poll_group_000", 00:19:53.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.356 "listen_address": { 00:19:53.356 "trtype": "TCP", 00:19:53.356 "adrfam": "IPv4", 00:19:53.356 "traddr": "10.0.0.2", 00:19:53.356 "trsvcid": "4420" 00:19:53.356 }, 00:19:53.356 "peer_address": { 00:19:53.356 "trtype": "TCP", 00:19:53.356 "adrfam": "IPv4", 00:19:53.356 "traddr": "10.0.0.1", 00:19:53.356 "trsvcid": "46170" 00:19:53.356 }, 00:19:53.356 "auth": { 00:19:53.356 "state": "completed", 00:19:53.356 "digest": "sha256", 00:19:53.356 "dhgroup": "ffdhe4096" 00:19:53.356 } 00:19:53.356 } 00:19:53.356 ]' 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.356 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.615 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:53.615 03:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.183 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.442 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.701 00:19:54.701 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.701 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.701 03:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.961 { 00:19:54.961 "cntlid": 33, 00:19:54.961 "qid": 0, 00:19:54.961 "state": "enabled", 00:19:54.961 "thread": "nvmf_tgt_poll_group_000", 00:19:54.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:54.961 "listen_address": { 00:19:54.961 "trtype": "TCP", 00:19:54.961 "adrfam": "IPv4", 00:19:54.961 "traddr": "10.0.0.2", 00:19:54.961 "trsvcid": "4420" 00:19:54.961 }, 00:19:54.961 "peer_address": { 00:19:54.961 "trtype": "TCP", 00:19:54.961 "adrfam": "IPv4", 00:19:54.961 "traddr": "10.0.0.1", 00:19:54.961 "trsvcid": "46196" 00:19:54.961 }, 00:19:54.961 "auth": { 00:19:54.961 "state": "completed", 00:19:54.961 "digest": "sha256", 00:19:54.961 "dhgroup": "ffdhe6144" 00:19:54.961 } 00:19:54.961 } 00:19:54.961 ]' 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.961 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.220 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.220 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.220 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.220 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret 
DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:55.220 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.789 03:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.048 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.307 00:19:56.307 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.307 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.307 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.566 { 00:19:56.566 "cntlid": 35, 00:19:56.566 "qid": 0, 00:19:56.566 "state": "enabled", 00:19:56.566 "thread": "nvmf_tgt_poll_group_000", 00:19:56.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:56.566 "listen_address": { 00:19:56.566 "trtype": "TCP", 00:19:56.566 "adrfam": "IPv4", 00:19:56.566 "traddr": "10.0.0.2", 00:19:56.566 "trsvcid": "4420" 00:19:56.566 }, 00:19:56.566 "peer_address": { 00:19:56.566 "trtype": "TCP", 00:19:56.566 "adrfam": "IPv4", 00:19:56.566 "traddr": "10.0.0.1", 00:19:56.566 "trsvcid": "46224" 00:19:56.566 }, 00:19:56.566 "auth": { 00:19:56.566 "state": "completed", 00:19:56.566 "digest": "sha256", 00:19:56.566 "dhgroup": "ffdhe6144" 00:19:56.566 } 00:19:56.566 } 00:19:56.566 ]' 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.566 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.825 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.825 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.825 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.825 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.825 03:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.825 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:56.825 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.393 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.653 03:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.912 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.171 { 00:19:58.171 "cntlid": 37, 00:19:58.171 "qid": 0, 00:19:58.171 "state": "enabled", 00:19:58.171 "thread": "nvmf_tgt_poll_group_000", 00:19:58.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.171 "listen_address": { 00:19:58.171 "trtype": "TCP", 00:19:58.171 "adrfam": "IPv4", 00:19:58.171 "traddr": "10.0.0.2", 00:19:58.171 "trsvcid": "4420" 00:19:58.171 }, 00:19:58.171 "peer_address": { 00:19:58.171 "trtype": "TCP", 00:19:58.171 "adrfam": "IPv4", 00:19:58.171 "traddr": "10.0.0.1", 00:19:58.171 "trsvcid": "46258" 00:19:58.171 }, 00:19:58.171 "auth": { 00:19:58.171 "state": "completed", 00:19:58.171 "digest": "sha256", 00:19:58.171 "dhgroup": "ffdhe6144" 00:19:58.171 } 00:19:58.171 } 00:19:58.171 ]' 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.171 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:58.430 03:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:19:58.998 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.257 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.258 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:59.258 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.258 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.258 03:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.258 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.258 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.258 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.824 00:19:59.824 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.824 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.824 03:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.824 { 00:19:59.824 "cntlid": 39, 00:19:59.824 "qid": 0, 00:19:59.824 "state": "enabled", 00:19:59.824 "thread": "nvmf_tgt_poll_group_000", 00:19:59.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.824 "listen_address": { 00:19:59.824 "trtype": "TCP", 00:19:59.824 "adrfam": "IPv4", 00:19:59.824 "traddr": "10.0.0.2", 00:19:59.824 "trsvcid": "4420" 00:19:59.824 }, 00:19:59.824 "peer_address": { 00:19:59.824 "trtype": "TCP", 00:19:59.824 "adrfam": "IPv4", 00:19:59.824 "traddr": "10.0.0.1", 00:19:59.824 "trsvcid": "45858" 00:19:59.824 }, 00:19:59.824 "auth": { 00:19:59.824 "state": "completed", 00:19:59.824 "digest": "sha256", 00:19:59.824 "dhgroup": "ffdhe6144" 00:19:59.824 } 00:19:59.824 } 00:19:59.824 ]' 00:19:59.824 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.082 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.341 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:00.341 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.910 03:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
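[editor's note] The xtrace output above repeats one and the same per-iteration flow for each digest/dhgroup/key combination (here sha256 / ffdhe8192 / key0). As a minimal sketch, restating only commands already visible in this trace, the flow is roughly the following; the RPC socket path, NQNs, addresses and key names are the ones used by this particular run, not general defaults:

    # target side (default RPC socket): allow the host NQN to authenticate with key0/ckey0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side (-s /var/tmp/host.sock): restrict the initiator to one digest/dhgroup, then attach
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # target side: confirm the new qpair authenticated with the expected parameters
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expects "completed"
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expects "sha256"
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expects "ffdhe8192"

    # tear down before the next combination; the script also repeats the check with the
    # kernel initiator via "nvme connect ... --dhchap-secret DHHC-1:..." / "nvme disconnect"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

[end of editor's note]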
00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.910 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.911 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.479 00:20:01.479 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.479 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.479 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.804 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.805 { 00:20:01.805 "cntlid": 41, 00:20:01.805 "qid": 0, 00:20:01.805 "state": "enabled", 00:20:01.805 "thread": "nvmf_tgt_poll_group_000", 00:20:01.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.805 "listen_address": { 00:20:01.805 "trtype": "TCP", 00:20:01.805 "adrfam": "IPv4", 00:20:01.805 "traddr": "10.0.0.2", 00:20:01.805 "trsvcid": "4420" 00:20:01.805 }, 00:20:01.805 "peer_address": { 00:20:01.805 "trtype": "TCP", 00:20:01.805 "adrfam": "IPv4", 00:20:01.805 "traddr": "10.0.0.1", 00:20:01.805 "trsvcid": "45904" 00:20:01.805 }, 00:20:01.805 "auth": { 00:20:01.805 "state": "completed", 00:20:01.805 "digest": "sha256", 00:20:01.805 "dhgroup": "ffdhe8192" 00:20:01.805 } 00:20:01.805 } 00:20:01.805 ]' 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.805 03:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.805 03:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.087 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:02.087 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.655 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.914 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.915 03:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.174 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.433 { 00:20:03.433 "cntlid": 43, 00:20:03.433 "qid": 0, 00:20:03.433 "state": "enabled", 00:20:03.433 "thread": "nvmf_tgt_poll_group_000", 00:20:03.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.433 "listen_address": { 00:20:03.433 "trtype": "TCP", 00:20:03.433 "adrfam": "IPv4", 00:20:03.433 "traddr": "10.0.0.2", 00:20:03.433 "trsvcid": "4420" 00:20:03.433 }, 00:20:03.433 "peer_address": { 00:20:03.433 "trtype": "TCP", 00:20:03.433 "adrfam": "IPv4", 00:20:03.433 "traddr": "10.0.0.1", 00:20:03.433 "trsvcid": "45936" 00:20:03.433 }, 00:20:03.433 "auth": { 00:20:03.433 "state": "completed", 00:20:03.433 "digest": "sha256", 00:20:03.433 "dhgroup": "ffdhe8192" 00:20:03.433 } 00:20:03.433 } 00:20:03.433 ]' 00:20:03.433 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.692 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.951 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:03.951 03:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.518 03:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.518 03:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.086 00:20:05.086 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.086 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.086 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.345 { 00:20:05.345 "cntlid": 45, 00:20:05.345 "qid": 0, 00:20:05.345 "state": "enabled", 00:20:05.345 "thread": "nvmf_tgt_poll_group_000", 00:20:05.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.345 "listen_address": { 00:20:05.345 "trtype": "TCP", 00:20:05.345 "adrfam": "IPv4", 00:20:05.345 "traddr": "10.0.0.2", 00:20:05.345 "trsvcid": "4420" 00:20:05.345 }, 00:20:05.345 "peer_address": { 00:20:05.345 "trtype": "TCP", 00:20:05.345 "adrfam": "IPv4", 00:20:05.345 "traddr": "10.0.0.1", 00:20:05.345 "trsvcid": "45954" 00:20:05.345 }, 00:20:05.345 "auth": { 00:20:05.345 "state": "completed", 00:20:05.345 "digest": "sha256", 00:20:05.345 "dhgroup": "ffdhe8192" 00:20:05.345 } 00:20:05.345 } 00:20:05.345 ]' 00:20:05.345 
03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.345 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.604 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:05.604 03:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.172 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.431 03:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.431 03:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.999 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.999 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.258 { 00:20:07.258 "cntlid": 47, 00:20:07.258 "qid": 0, 00:20:07.258 "state": "enabled", 00:20:07.258 "thread": "nvmf_tgt_poll_group_000", 00:20:07.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.258 "listen_address": { 00:20:07.258 "trtype": "TCP", 00:20:07.258 "adrfam": "IPv4", 00:20:07.258 "traddr": "10.0.0.2", 00:20:07.258 "trsvcid": "4420" 00:20:07.258 }, 00:20:07.258 "peer_address": { 00:20:07.258 "trtype": "TCP", 00:20:07.258 "adrfam": "IPv4", 00:20:07.258 "traddr": "10.0.0.1", 00:20:07.258 "trsvcid": "45980" 00:20:07.258 }, 00:20:07.258 "auth": { 00:20:07.258 "state": "completed", 00:20:07.258 
"digest": "sha256", 00:20:07.258 "dhgroup": "ffdhe8192" 00:20:07.258 } 00:20:07.258 } 00:20:07.258 ]' 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.258 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.517 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:07.517 03:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:08.085 03:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.085 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.344 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.344 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.344 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.344 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.344 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.344 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.344 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.604 { 00:20:08.604 "cntlid": 49, 00:20:08.604 "qid": 0, 00:20:08.604 "state": "enabled", 00:20:08.604 "thread": "nvmf_tgt_poll_group_000", 00:20:08.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.604 "listen_address": { 00:20:08.604 "trtype": "TCP", 00:20:08.604 "adrfam": "IPv4", 
00:20:08.604 "traddr": "10.0.0.2", 00:20:08.604 "trsvcid": "4420" 00:20:08.604 }, 00:20:08.604 "peer_address": { 00:20:08.604 "trtype": "TCP", 00:20:08.604 "adrfam": "IPv4", 00:20:08.604 "traddr": "10.0.0.1", 00:20:08.604 "trsvcid": "46026" 00:20:08.604 }, 00:20:08.604 "auth": { 00:20:08.604 "state": "completed", 00:20:08.604 "digest": "sha384", 00:20:08.604 "dhgroup": "null" 00:20:08.604 } 00:20:08.604 } 00:20:08.604 ]' 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.604 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.863 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.863 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:08.863 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.863 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.863 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.863 03:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.122 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:09.122 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.689 03:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.948 00:20:09.948 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.948 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.948 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.207 { 00:20:10.207 "cntlid": 51, 00:20:10.207 "qid": 0, 00:20:10.207 "state": "enabled", 
00:20:10.207 "thread": "nvmf_tgt_poll_group_000", 00:20:10.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.207 "listen_address": { 00:20:10.207 "trtype": "TCP", 00:20:10.207 "adrfam": "IPv4", 00:20:10.207 "traddr": "10.0.0.2", 00:20:10.207 "trsvcid": "4420" 00:20:10.207 }, 00:20:10.207 "peer_address": { 00:20:10.207 "trtype": "TCP", 00:20:10.207 "adrfam": "IPv4", 00:20:10.207 "traddr": "10.0.0.1", 00:20:10.207 "trsvcid": "40188" 00:20:10.207 }, 00:20:10.207 "auth": { 00:20:10.207 "state": "completed", 00:20:10.207 "digest": "sha384", 00:20:10.207 "dhgroup": "null" 00:20:10.207 } 00:20:10.207 } 00:20:10.207 ]' 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:10.207 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.467 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.467 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.467 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.467 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:10.467 03:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:11.034 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.034 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:11.034 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.034 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.034 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.035 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.035 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
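At this point the trace has moved on to sha384 with the null dhgroup (plain CHAP, no Diffie-Hellman exchange). Every iteration finishes with the same verification and teardown sequence; a hedged sketch of it follows, reusing the variable names from the sketch earlier in this log and the exact jq paths printed in the qpair dumps above. The DHHC-1 secret variables are placeholders for the keys the test generated.

# Verification and teardown of one round (here digest=sha384, dhgroup=null).
# Commands and JSON fields are as printed in the trace; $host_secret/$ctrl_secret
# stand in for the DHHC-1 strings shown above (the controller key is omitted in
# the key3 rounds, which only pass --dhchap-secret).
digest=sha384 dhgroup=null
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Drop the RPC-driven controller, then repeat the handshake once through nvme-cli
# with the secrets passed directly, before removing the host from the subsystem.
$HOSTRPC bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
     --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
     --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"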
00:20:11.035 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.293 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.294 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.553 00:20:11.553 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.553 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.553 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.812 03:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.812 { 00:20:11.812 "cntlid": 53, 00:20:11.812 "qid": 0, 00:20:11.812 "state": "enabled", 00:20:11.812 "thread": "nvmf_tgt_poll_group_000", 00:20:11.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.812 "listen_address": { 00:20:11.812 "trtype": "TCP", 00:20:11.812 "adrfam": "IPv4", 00:20:11.812 "traddr": "10.0.0.2", 00:20:11.812 "trsvcid": "4420" 00:20:11.812 }, 00:20:11.812 "peer_address": { 00:20:11.812 "trtype": "TCP", 00:20:11.812 "adrfam": "IPv4", 00:20:11.812 "traddr": "10.0.0.1", 00:20:11.812 "trsvcid": "40220" 00:20:11.812 }, 00:20:11.812 "auth": { 00:20:11.812 "state": "completed", 00:20:11.812 "digest": "sha384", 00:20:11.812 "dhgroup": "null" 00:20:11.812 } 00:20:11.812 } 00:20:11.812 ]' 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.812 03:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.071 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:12.071 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.639 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.898 03:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.157 00:20:13.157 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.157 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.157 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.416 { 00:20:13.416 "cntlid": 55, 00:20:13.416 "qid": 0, 00:20:13.416 "state": "enabled", 00:20:13.416 "thread": "nvmf_tgt_poll_group_000", 00:20:13.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.416 "listen_address": { 00:20:13.416 "trtype": "TCP", 00:20:13.416 "adrfam": "IPv4", 00:20:13.416 "traddr": "10.0.0.2", 00:20:13.416 "trsvcid": "4420" 00:20:13.416 }, 00:20:13.416 "peer_address": { 00:20:13.416 "trtype": "TCP", 00:20:13.416 "adrfam": "IPv4", 00:20:13.416 "traddr": "10.0.0.1", 00:20:13.416 "trsvcid": "40254" 00:20:13.416 }, 00:20:13.416 "auth": { 00:20:13.416 "state": "completed", 00:20:13.416 "digest": "sha384", 00:20:13.416 "dhgroup": "null" 00:20:13.416 } 00:20:13.416 } 00:20:13.416 ]' 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.416 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.675 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:13.675 03:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.242 03:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.242 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.500 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.759 00:20:14.759 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.759 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.759 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.017 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.017 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.018 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:15.018 03:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.018 { 00:20:15.018 "cntlid": 57, 00:20:15.018 "qid": 0, 00:20:15.018 "state": "enabled", 00:20:15.018 "thread": "nvmf_tgt_poll_group_000", 00:20:15.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.018 "listen_address": { 00:20:15.018 "trtype": "TCP", 00:20:15.018 "adrfam": "IPv4", 00:20:15.018 "traddr": "10.0.0.2", 00:20:15.018 "trsvcid": "4420" 00:20:15.018 }, 00:20:15.018 "peer_address": { 00:20:15.018 "trtype": "TCP", 00:20:15.018 "adrfam": "IPv4", 00:20:15.018 "traddr": "10.0.0.1", 00:20:15.018 "trsvcid": "40270" 00:20:15.018 }, 00:20:15.018 "auth": { 00:20:15.018 "state": "completed", 00:20:15.018 "digest": "sha384", 00:20:15.018 "dhgroup": "ffdhe2048" 00:20:15.018 } 00:20:15.018 } 00:20:15.018 ]' 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.018 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.277 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:15.277 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.845 03:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.104 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.363 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.363 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.363 { 00:20:16.363 "cntlid": 59, 00:20:16.363 "qid": 0, 00:20:16.363 "state": "enabled", 00:20:16.363 "thread": "nvmf_tgt_poll_group_000", 00:20:16.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.363 "listen_address": { 00:20:16.363 "trtype": "TCP", 00:20:16.363 "adrfam": "IPv4", 00:20:16.364 "traddr": "10.0.0.2", 00:20:16.364 "trsvcid": "4420" 00:20:16.364 }, 00:20:16.364 "peer_address": { 00:20:16.364 "trtype": "TCP", 00:20:16.364 "adrfam": "IPv4", 00:20:16.364 "traddr": "10.0.0.1", 00:20:16.364 "trsvcid": "40300" 00:20:16.364 }, 00:20:16.364 "auth": { 00:20:16.364 "state": "completed", 00:20:16.364 "digest": "sha384", 00:20:16.364 "dhgroup": "ffdhe2048" 00:20:16.364 } 00:20:16.364 } 00:20:16.364 ]' 00:20:16.364 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.623 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.882 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:16.882 03:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.449 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.708 00:20:17.708 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.708 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.708 03:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.967 { 00:20:17.967 "cntlid": 61, 00:20:17.967 "qid": 0, 00:20:17.967 "state": "enabled", 00:20:17.967 "thread": "nvmf_tgt_poll_group_000", 00:20:17.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.967 "listen_address": { 00:20:17.967 "trtype": "TCP", 00:20:17.967 "adrfam": "IPv4", 00:20:17.967 "traddr": "10.0.0.2", 00:20:17.967 "trsvcid": "4420" 00:20:17.967 }, 00:20:17.967 "peer_address": { 00:20:17.967 "trtype": "TCP", 00:20:17.967 "adrfam": "IPv4", 00:20:17.967 "traddr": "10.0.0.1", 00:20:17.967 "trsvcid": "40330" 00:20:17.967 }, 00:20:17.967 "auth": { 00:20:17.967 "state": "completed", 00:20:17.967 "digest": "sha384", 00:20:17.967 "dhgroup": "ffdhe2048" 00:20:17.967 } 00:20:17.967 } 00:20:17.967 ]' 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.967 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.226 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.226 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.226 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.226 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.226 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.227 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:18.227 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:18.795 03:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.795 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.795 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.795 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.054 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.313 00:20:19.313 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.313 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.313 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.572 { 00:20:19.572 "cntlid": 63, 00:20:19.572 "qid": 0, 00:20:19.572 "state": "enabled", 00:20:19.572 "thread": "nvmf_tgt_poll_group_000", 00:20:19.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.572 "listen_address": { 00:20:19.572 "trtype": "TCP", 00:20:19.572 "adrfam": "IPv4", 00:20:19.572 "traddr": "10.0.0.2", 00:20:19.572 "trsvcid": "4420" 00:20:19.572 }, 00:20:19.572 "peer_address": { 00:20:19.572 "trtype": "TCP", 00:20:19.572 "adrfam": "IPv4", 00:20:19.572 "traddr": "10.0.0.1", 00:20:19.572 "trsvcid": "44096" 00:20:19.572 }, 00:20:19.572 "auth": { 00:20:19.572 "state": "completed", 00:20:19.572 "digest": "sha384", 00:20:19.572 "dhgroup": "ffdhe2048" 00:20:19.572 } 00:20:19.572 } 00:20:19.572 ]' 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.572 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.832 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.832 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.832 03:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.832 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:19.832 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:20.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.399 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.659 03:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.918 
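For readability, the host-side setup that each key/dhgroup pass above repeats can be condensed into the sketch below. The socket path, NQNs, and flags are copied verbatim from the trace; the assumption is only that rpc_cmd is the target-side scripts/rpc.py wrapper provided by autotest_common.sh, as the trace never expands it.

  # hostrpc mirrors target/auth.sh@31: host-side RPCs go through /var/tmp/host.sock
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # 1. restrict the host to the digest/dhgroup pair under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2. register the host NQN on the subsystem, binding it to the key pair (target-side RPC)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. attach a bdev controller from the host, authenticating with the same keys
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0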
00:20:20.918 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.918 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.919 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.178 { 00:20:21.178 "cntlid": 65, 00:20:21.178 "qid": 0, 00:20:21.178 "state": "enabled", 00:20:21.178 "thread": "nvmf_tgt_poll_group_000", 00:20:21.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.178 "listen_address": { 00:20:21.178 "trtype": "TCP", 00:20:21.178 "adrfam": "IPv4", 00:20:21.178 "traddr": "10.0.0.2", 00:20:21.178 "trsvcid": "4420" 00:20:21.178 }, 00:20:21.178 "peer_address": { 00:20:21.178 "trtype": "TCP", 00:20:21.178 "adrfam": "IPv4", 00:20:21.178 "traddr": "10.0.0.1", 00:20:21.178 "trsvcid": "44110" 00:20:21.178 }, 00:20:21.178 "auth": { 00:20:21.178 "state": "completed", 00:20:21.178 "digest": "sha384", 00:20:21.178 "dhgroup": "ffdhe3072" 00:20:21.178 } 00:20:21.178 } 00:20:21.178 ]' 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.178 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.438 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:21.438 03:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.006 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.265 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.524 00:20:22.524 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.524 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.524 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.783 { 00:20:22.783 "cntlid": 67, 00:20:22.783 "qid": 0, 00:20:22.783 "state": "enabled", 00:20:22.783 "thread": "nvmf_tgt_poll_group_000", 00:20:22.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.783 "listen_address": { 00:20:22.783 "trtype": "TCP", 00:20:22.783 "adrfam": "IPv4", 00:20:22.783 "traddr": "10.0.0.2", 00:20:22.783 "trsvcid": "4420" 00:20:22.783 }, 00:20:22.783 "peer_address": { 00:20:22.783 "trtype": "TCP", 00:20:22.783 "adrfam": "IPv4", 00:20:22.783 "traddr": "10.0.0.1", 00:20:22.783 "trsvcid": "44142" 00:20:22.783 }, 00:20:22.783 "auth": { 00:20:22.783 "state": "completed", 00:20:22.783 "digest": "sha384", 00:20:22.783 "dhgroup": "ffdhe3072" 00:20:22.783 } 00:20:22.783 } 00:20:22.783 ]' 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.783 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.784 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.784 03:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.043 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret 
DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:23.043 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.611 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.870 03:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.128 00:20:24.128 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.128 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.128 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.387 { 00:20:24.387 "cntlid": 69, 00:20:24.387 "qid": 0, 00:20:24.387 "state": "enabled", 00:20:24.387 "thread": "nvmf_tgt_poll_group_000", 00:20:24.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.387 "listen_address": { 00:20:24.387 "trtype": "TCP", 00:20:24.387 "adrfam": "IPv4", 00:20:24.387 "traddr": "10.0.0.2", 00:20:24.387 "trsvcid": "4420" 00:20:24.387 }, 00:20:24.387 "peer_address": { 00:20:24.387 "trtype": "TCP", 00:20:24.387 "adrfam": "IPv4", 00:20:24.387 "traddr": "10.0.0.1", 00:20:24.387 "trsvcid": "44170" 00:20:24.387 }, 00:20:24.387 "auth": { 00:20:24.387 "state": "completed", 00:20:24.387 "digest": "sha384", 00:20:24.387 "dhgroup": "ffdhe3072" 00:20:24.387 } 00:20:24.387 } 00:20:24.387 ]' 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.387 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:24.646 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:24.646 03:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.215 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
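After each attach, the trace verifies the negotiated parameters before tearing the path down (target/auth.sh@73-78). A minimal sketch of that check, using the same jq filters seen above and assuming the hostrpc wrapper from the previous sketch:

  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # confirm the controller attached under the expected name
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # read back the qpair's auth parameters from the target and compare against the pass under test
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # tear down the bdev path before the kernel-initiator connect is attempted
  hostrpc bdev_nvme_detach_controller nvme0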
00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.475 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.734 00:20:25.734 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.734 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.734 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.993 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.993 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.993 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.993 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.993 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.993 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.993 { 00:20:25.993 "cntlid": 71, 00:20:25.993 "qid": 0, 00:20:25.993 "state": "enabled", 00:20:25.993 "thread": "nvmf_tgt_poll_group_000", 00:20:25.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.993 "listen_address": { 00:20:25.994 "trtype": "TCP", 00:20:25.994 "adrfam": "IPv4", 00:20:25.994 "traddr": "10.0.0.2", 00:20:25.994 "trsvcid": "4420" 00:20:25.994 }, 00:20:25.994 "peer_address": { 00:20:25.994 "trtype": "TCP", 00:20:25.994 "adrfam": "IPv4", 00:20:25.994 "traddr": "10.0.0.1", 00:20:25.994 "trsvcid": "44190" 00:20:25.994 }, 00:20:25.994 "auth": { 00:20:25.994 "state": "completed", 00:20:25.994 "digest": "sha384", 00:20:25.994 "dhgroup": "ffdhe3072" 00:20:25.994 } 00:20:25.994 } 00:20:25.994 ]' 00:20:25.994 03:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.994 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.253 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:26.253 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.821 03:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
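Each pass ends with a kernel-initiator connect using nvme-cli and the DHHC-1 secrets printed in the trace, followed by cleanup of the host entry (target/auth.sh@36, @82, @83). A sketch of that tail, with the secret values elided rather than repeated:

  # DHHC-1-formatted secrets as printed in the trace (values elided here)
  key0='DHHC-1:00:...'     # host key
  ckey0='DHHC-1:03:...'    # controller (bidirectional) key

  # kernel initiator path: connect with DH-HMAC-CHAP, then disconnect
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
      --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # drop the host entry so the next key/dhgroup combination starts from a clean subsystem
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562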
00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.081 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.340 00:20:27.340 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.340 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.340 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.600 { 00:20:27.600 "cntlid": 73, 00:20:27.600 "qid": 0, 00:20:27.600 "state": "enabled", 00:20:27.600 "thread": "nvmf_tgt_poll_group_000", 00:20:27.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.600 "listen_address": { 00:20:27.600 "trtype": "TCP", 00:20:27.600 "adrfam": "IPv4", 00:20:27.600 "traddr": "10.0.0.2", 00:20:27.600 "trsvcid": "4420" 00:20:27.600 }, 00:20:27.600 "peer_address": { 00:20:27.600 "trtype": "TCP", 00:20:27.600 "adrfam": "IPv4", 00:20:27.600 "traddr": "10.0.0.1", 00:20:27.600 "trsvcid": "44216" 00:20:27.600 }, 00:20:27.600 "auth": { 00:20:27.600 "state": "completed", 00:20:27.600 "digest": "sha384", 00:20:27.600 "dhgroup": "ffdhe4096" 00:20:27.600 } 00:20:27.600 } 00:20:27.600 ]' 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.600 
03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.600 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.859 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:27.859 03:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.428 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.687 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.947 00:20:28.947 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.947 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.947 03:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.947 { 00:20:28.947 "cntlid": 75, 00:20:28.947 "qid": 0, 00:20:28.947 "state": "enabled", 00:20:28.947 "thread": "nvmf_tgt_poll_group_000", 00:20:28.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.947 "listen_address": { 00:20:28.947 "trtype": "TCP", 00:20:28.947 "adrfam": "IPv4", 00:20:28.947 "traddr": "10.0.0.2", 00:20:28.947 "trsvcid": "4420" 00:20:28.947 }, 00:20:28.947 "peer_address": { 00:20:28.947 "trtype": "TCP", 00:20:28.947 "adrfam": "IPv4", 00:20:28.947 "traddr": "10.0.0.1", 00:20:28.947 "trsvcid": "44254" 00:20:28.947 }, 00:20:28.947 "auth": { 00:20:28.947 "state": "completed", 00:20:28.947 "digest": "sha384", 00:20:28.947 "dhgroup": "ffdhe4096" 00:20:28.947 } 00:20:28.947 } 00:20:28.947 ]' 00:20:28.947 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.206 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.465 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:29.465 03:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.033 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.293 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.552 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.552 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.811 { 00:20:30.811 "cntlid": 77, 00:20:30.811 "qid": 0, 00:20:30.811 "state": "enabled", 00:20:30.811 "thread": "nvmf_tgt_poll_group_000", 00:20:30.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.811 "listen_address": { 00:20:30.811 "trtype": "TCP", 00:20:30.811 "adrfam": "IPv4", 00:20:30.811 "traddr": "10.0.0.2", 00:20:30.811 "trsvcid": "4420" 00:20:30.811 }, 00:20:30.811 "peer_address": { 00:20:30.811 "trtype": "TCP", 00:20:30.811 "adrfam": "IPv4", 00:20:30.811 "traddr": "10.0.0.1", 00:20:30.811 "trsvcid": "43824" 00:20:30.811 }, 00:20:30.811 "auth": { 00:20:30.811 "state": "completed", 00:20:30.811 "digest": "sha384", 00:20:30.811 "dhgroup": "ffdhe4096" 00:20:30.811 } 00:20:30.811 } 00:20:30.811 ]' 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.811 03:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.811 03:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.070 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:31.070 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.639 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.912 03:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.171 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.171 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.430 { 00:20:32.430 "cntlid": 79, 00:20:32.430 "qid": 0, 00:20:32.430 "state": "enabled", 00:20:32.430 "thread": "nvmf_tgt_poll_group_000", 00:20:32.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.430 "listen_address": { 00:20:32.430 "trtype": "TCP", 00:20:32.430 "adrfam": "IPv4", 00:20:32.430 "traddr": "10.0.0.2", 00:20:32.430 "trsvcid": "4420" 00:20:32.430 }, 00:20:32.430 "peer_address": { 00:20:32.430 "trtype": "TCP", 00:20:32.430 "adrfam": "IPv4", 00:20:32.430 "traddr": "10.0.0.1", 00:20:32.430 "trsvcid": "43854" 00:20:32.430 }, 00:20:32.430 "auth": { 00:20:32.430 "state": "completed", 00:20:32.430 "digest": "sha384", 00:20:32.430 "dhgroup": "ffdhe4096" 00:20:32.430 } 00:20:32.430 } 00:20:32.430 ]' 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.430 03:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.430 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.689 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:32.689 03:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:33.255 03:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.255 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.514 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.514 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.514 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.514 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.773 00:20:33.773 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.773 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.773 03:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.035 { 00:20:34.035 "cntlid": 81, 00:20:34.035 "qid": 0, 00:20:34.035 "state": "enabled", 00:20:34.035 "thread": "nvmf_tgt_poll_group_000", 00:20:34.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.035 "listen_address": { 00:20:34.035 "trtype": "TCP", 00:20:34.035 "adrfam": "IPv4", 00:20:34.035 "traddr": "10.0.0.2", 00:20:34.035 "trsvcid": "4420" 00:20:34.035 }, 00:20:34.035 "peer_address": { 00:20:34.035 "trtype": "TCP", 00:20:34.035 "adrfam": "IPv4", 00:20:34.035 "traddr": "10.0.0.1", 00:20:34.035 "trsvcid": "43888" 00:20:34.035 }, 00:20:34.035 "auth": { 00:20:34.035 "state": "completed", 00:20:34.035 "digest": 
"sha384", 00:20:34.035 "dhgroup": "ffdhe6144" 00:20:34.035 } 00:20:34.035 } 00:20:34.035 ]' 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.035 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.302 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:34.302 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.869 03:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.128 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.386 00:20:35.386 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.386 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.386 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.645 { 00:20:35.645 "cntlid": 83, 00:20:35.645 "qid": 0, 00:20:35.645 "state": "enabled", 00:20:35.645 "thread": "nvmf_tgt_poll_group_000", 00:20:35.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.645 "listen_address": { 00:20:35.645 "trtype": "TCP", 00:20:35.645 "adrfam": "IPv4", 00:20:35.645 "traddr": "10.0.0.2", 00:20:35.645 
"trsvcid": "4420" 00:20:35.645 }, 00:20:35.645 "peer_address": { 00:20:35.645 "trtype": "TCP", 00:20:35.645 "adrfam": "IPv4", 00:20:35.645 "traddr": "10.0.0.1", 00:20:35.645 "trsvcid": "43912" 00:20:35.645 }, 00:20:35.645 "auth": { 00:20:35.645 "state": "completed", 00:20:35.645 "digest": "sha384", 00:20:35.645 "dhgroup": "ffdhe6144" 00:20:35.645 } 00:20:35.645 } 00:20:35.645 ]' 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.645 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.903 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:35.903 03:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.470 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.728 
03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.728 03:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.987 00:20:36.987 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.987 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.987 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.245 { 00:20:37.245 "cntlid": 85, 00:20:37.245 "qid": 0, 00:20:37.245 "state": "enabled", 00:20:37.245 "thread": "nvmf_tgt_poll_group_000", 00:20:37.245 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.245 "listen_address": { 00:20:37.245 "trtype": "TCP", 00:20:37.245 "adrfam": "IPv4", 00:20:37.245 "traddr": "10.0.0.2", 00:20:37.245 "trsvcid": "4420" 00:20:37.245 }, 00:20:37.245 "peer_address": { 00:20:37.245 "trtype": "TCP", 00:20:37.245 "adrfam": "IPv4", 00:20:37.245 "traddr": "10.0.0.1", 00:20:37.245 "trsvcid": "43948" 00:20:37.245 }, 00:20:37.245 "auth": { 00:20:37.245 "state": "completed", 00:20:37.245 "digest": "sha384", 00:20:37.245 "dhgroup": "ffdhe6144" 00:20:37.245 } 00:20:37.245 } 00:20:37.245 ]' 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.245 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.504 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:37.504 03:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.070 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.070 03:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.329 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.587 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.846 03:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.846 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.846 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.846 { 00:20:38.846 "cntlid": 87, 
00:20:38.846 "qid": 0, 00:20:38.846 "state": "enabled", 00:20:38.846 "thread": "nvmf_tgt_poll_group_000", 00:20:38.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.846 "listen_address": { 00:20:38.846 "trtype": "TCP", 00:20:38.846 "adrfam": "IPv4", 00:20:38.846 "traddr": "10.0.0.2", 00:20:38.846 "trsvcid": "4420" 00:20:38.846 }, 00:20:38.846 "peer_address": { 00:20:38.846 "trtype": "TCP", 00:20:38.846 "adrfam": "IPv4", 00:20:38.846 "traddr": "10.0.0.1", 00:20:38.846 "trsvcid": "43980" 00:20:38.846 }, 00:20:38.846 "auth": { 00:20:38.846 "state": "completed", 00:20:38.846 "digest": "sha384", 00:20:38.846 "dhgroup": "ffdhe6144" 00:20:38.846 } 00:20:38.846 } 00:20:38.846 ]' 00:20:38.846 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.846 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.846 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.104 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.104 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.104 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.104 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.104 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.362 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:39.362 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.929 03:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.187 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.446 00:20:40.446 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.446 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.446 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.704 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.705 { 00:20:40.705 "cntlid": 89, 00:20:40.705 "qid": 0, 00:20:40.705 "state": "enabled", 00:20:40.705 "thread": "nvmf_tgt_poll_group_000", 00:20:40.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.705 "listen_address": { 00:20:40.705 "trtype": "TCP", 00:20:40.705 "adrfam": "IPv4", 00:20:40.705 "traddr": "10.0.0.2", 00:20:40.705 "trsvcid": "4420" 00:20:40.705 }, 00:20:40.705 "peer_address": { 00:20:40.705 "trtype": "TCP", 00:20:40.705 "adrfam": "IPv4", 00:20:40.705 "traddr": "10.0.0.1", 00:20:40.705 "trsvcid": "45194" 00:20:40.705 }, 00:20:40.705 "auth": { 00:20:40.705 "state": "completed", 00:20:40.705 "digest": "sha384", 00:20:40.705 "dhgroup": "ffdhe8192" 00:20:40.705 } 00:20:40.705 } 00:20:40.705 ]' 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.705 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.963 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:40.963 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.963 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.963 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.963 03:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.963 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:40.963 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.531 03:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.531 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.790 03:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.358 00:20:42.358 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.358 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.358 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.617 { 00:20:42.617 "cntlid": 91, 00:20:42.617 "qid": 0, 00:20:42.617 "state": "enabled", 00:20:42.617 "thread": "nvmf_tgt_poll_group_000", 00:20:42.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.617 "listen_address": { 00:20:42.617 "trtype": "TCP", 00:20:42.617 "adrfam": "IPv4", 00:20:42.617 "traddr": "10.0.0.2", 00:20:42.617 "trsvcid": "4420" 00:20:42.617 }, 00:20:42.617 "peer_address": { 00:20:42.617 "trtype": "TCP", 00:20:42.617 "adrfam": "IPv4", 00:20:42.617 "traddr": "10.0.0.1", 00:20:42.617 "trsvcid": "45224" 00:20:42.617 }, 00:20:42.617 "auth": { 00:20:42.617 "state": "completed", 00:20:42.617 "digest": "sha384", 00:20:42.617 "dhgroup": "ffdhe8192" 00:20:42.617 } 00:20:42.617 } 00:20:42.617 ]' 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.617 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.876 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:42.876 03:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.444 03:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.444 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.703 03:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.271 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.271 03:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.271 { 00:20:44.271 "cntlid": 93, 00:20:44.271 "qid": 0, 00:20:44.271 "state": "enabled", 00:20:44.271 "thread": "nvmf_tgt_poll_group_000", 00:20:44.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.271 "listen_address": { 00:20:44.271 "trtype": "TCP", 00:20:44.271 "adrfam": "IPv4", 00:20:44.271 "traddr": "10.0.0.2", 00:20:44.271 "trsvcid": "4420" 00:20:44.271 }, 00:20:44.271 "peer_address": { 00:20:44.271 "trtype": "TCP", 00:20:44.271 "adrfam": "IPv4", 00:20:44.271 "traddr": "10.0.0.1", 00:20:44.271 "trsvcid": "45250" 00:20:44.271 }, 00:20:44.271 "auth": { 00:20:44.271 "state": "completed", 00:20:44.271 "digest": "sha384", 00:20:44.271 "dhgroup": "ffdhe8192" 00:20:44.271 } 00:20:44.271 } 00:20:44.271 ]' 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.271 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.530 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.530 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.530 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.530 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.530 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.530 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:44.789 03:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.357 03:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.357 03:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.925 00:20:45.925 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.925 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.925 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.184 { 00:20:46.184 "cntlid": 95, 00:20:46.184 "qid": 0, 00:20:46.184 "state": "enabled", 00:20:46.184 "thread": "nvmf_tgt_poll_group_000", 00:20:46.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.184 "listen_address": { 00:20:46.184 "trtype": "TCP", 00:20:46.184 "adrfam": "IPv4", 00:20:46.184 "traddr": "10.0.0.2", 00:20:46.184 "trsvcid": "4420" 00:20:46.184 }, 00:20:46.184 "peer_address": { 00:20:46.184 "trtype": "TCP", 00:20:46.184 "adrfam": "IPv4", 00:20:46.184 "traddr": "10.0.0.1", 00:20:46.184 "trsvcid": "45272" 00:20:46.184 }, 00:20:46.184 "auth": { 00:20:46.184 "state": "completed", 00:20:46.184 "digest": "sha384", 00:20:46.184 "dhgroup": "ffdhe8192" 00:20:46.184 } 00:20:46.184 } 00:20:46.184 ]' 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.184 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.443 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:46.443 03:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.012 03:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.012 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.271 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.529 00:20:47.529 
03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.529 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.529 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.788 { 00:20:47.788 "cntlid": 97, 00:20:47.788 "qid": 0, 00:20:47.788 "state": "enabled", 00:20:47.788 "thread": "nvmf_tgt_poll_group_000", 00:20:47.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.788 "listen_address": { 00:20:47.788 "trtype": "TCP", 00:20:47.788 "adrfam": "IPv4", 00:20:47.788 "traddr": "10.0.0.2", 00:20:47.788 "trsvcid": "4420" 00:20:47.788 }, 00:20:47.788 "peer_address": { 00:20:47.788 "trtype": "TCP", 00:20:47.788 "adrfam": "IPv4", 00:20:47.788 "traddr": "10.0.0.1", 00:20:47.788 "trsvcid": "45314" 00:20:47.788 }, 00:20:47.788 "auth": { 00:20:47.788 "state": "completed", 00:20:47.788 "digest": "sha512", 00:20:47.788 "dhgroup": "null" 00:20:47.788 } 00:20:47.788 } 00:20:47.788 ]' 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.788 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:47.789 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.789 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.789 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.789 03:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.047 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:48.047 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.615 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.874 03:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.133 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.133 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.133 { 00:20:49.133 "cntlid": 99, 00:20:49.133 "qid": 0, 00:20:49.133 "state": "enabled", 00:20:49.133 "thread": "nvmf_tgt_poll_group_000", 00:20:49.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.133 "listen_address": { 00:20:49.133 "trtype": "TCP", 00:20:49.133 "adrfam": "IPv4", 00:20:49.133 "traddr": "10.0.0.2", 00:20:49.133 "trsvcid": "4420" 00:20:49.133 }, 00:20:49.133 "peer_address": { 00:20:49.133 "trtype": "TCP", 00:20:49.134 "adrfam": "IPv4", 00:20:49.134 "traddr": "10.0.0.1", 00:20:49.134 "trsvcid": "45344" 00:20:49.134 }, 00:20:49.134 "auth": { 00:20:49.134 "state": "completed", 00:20:49.134 "digest": "sha512", 00:20:49.134 "dhgroup": "null" 00:20:49.134 } 00:20:49.134 } 00:20:49.134 ]' 00:20:49.134 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.392 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.392 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.392 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:49.393 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.393 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.393 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.393 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.651 03:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:49.651 03:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
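The trace above is one pass of the nested loops in target/auth.sh (one pass per digest, dhgroup and key index, see auth.sh@118-120). Condensed into a plain shell sketch, with scripts/rpc.py abbreviating the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path and key2/ckey2 naming keys that were set up earlier in the run (not shown here), the host-side steps of the sha512/null round are roughly:

# limit the host app to the digest/dhgroup pair under test (target/auth.sh@121)
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

# register the host NQN on the target subsystem with this round's key pair (target/auth.sh@70)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

# attach a controller through the host app, authenticating with the same keys (target/auth.sh@60)
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2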
00:20:50.219 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.478 00:20:50.478 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.478 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.478 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.737 { 00:20:50.737 "cntlid": 101, 00:20:50.737 "qid": 0, 00:20:50.737 "state": "enabled", 00:20:50.737 "thread": "nvmf_tgt_poll_group_000", 00:20:50.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.737 "listen_address": { 00:20:50.737 "trtype": "TCP", 00:20:50.737 "adrfam": "IPv4", 00:20:50.737 "traddr": "10.0.0.2", 00:20:50.737 "trsvcid": "4420" 00:20:50.737 }, 00:20:50.737 "peer_address": { 00:20:50.737 "trtype": "TCP", 00:20:50.737 "adrfam": "IPv4", 00:20:50.737 "traddr": "10.0.0.1", 00:20:50.737 "trsvcid": "38876" 00:20:50.737 }, 00:20:50.737 "auth": { 00:20:50.737 "state": "completed", 00:20:50.737 "digest": "sha512", 00:20:50.737 "dhgroup": "null" 00:20:50.737 } 00:20:50.737 } 00:20:50.737 ]' 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.737 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.995 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.995 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.995 03:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.995 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:50.995 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:51.562 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.821 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.822 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.822 03:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.080 00:20:52.080 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.080 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.080 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.339 { 00:20:52.339 "cntlid": 103, 00:20:52.339 "qid": 0, 00:20:52.339 "state": "enabled", 00:20:52.339 "thread": "nvmf_tgt_poll_group_000", 00:20:52.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.339 "listen_address": { 00:20:52.339 "trtype": "TCP", 00:20:52.339 "adrfam": "IPv4", 00:20:52.339 "traddr": "10.0.0.2", 00:20:52.339 "trsvcid": "4420" 00:20:52.339 }, 00:20:52.339 "peer_address": { 00:20:52.339 "trtype": "TCP", 00:20:52.339 "adrfam": "IPv4", 00:20:52.339 "traddr": "10.0.0.1", 00:20:52.339 "trsvcid": "38900" 00:20:52.339 }, 00:20:52.339 "auth": { 00:20:52.339 "state": "completed", 00:20:52.339 "digest": "sha512", 00:20:52.339 "dhgroup": "null" 00:20:52.339 } 00:20:52.339 } 00:20:52.339 ]' 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.339 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.598 03:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:52.598 03:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:53.164 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.164 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.164 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.165 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.165 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.165 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.165 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.165 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.165 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.423 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.424 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.424 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
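What connect_authenticate then verifies, paraphrased from the jq filters repeated in the trace (the exact quoting and redirections in auth.sh may differ from this sketch), is that the host app actually created the controller and that the target-side qpair negotiated exactly the digest and dhgroup configured for the round, e.g. for sha512/ffdhe2048:

# the host app should now report the controller created above
[[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# the target's qpair listing carries the negotiated auth parameters
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]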
00:20:53.424 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.424 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.682 00:20:53.682 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.682 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.682 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.941 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.941 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.941 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.941 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.941 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.941 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.941 { 00:20:53.941 "cntlid": 105, 00:20:53.941 "qid": 0, 00:20:53.941 "state": "enabled", 00:20:53.941 "thread": "nvmf_tgt_poll_group_000", 00:20:53.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.941 "listen_address": { 00:20:53.942 "trtype": "TCP", 00:20:53.942 "adrfam": "IPv4", 00:20:53.942 "traddr": "10.0.0.2", 00:20:53.942 "trsvcid": "4420" 00:20:53.942 }, 00:20:53.942 "peer_address": { 00:20:53.942 "trtype": "TCP", 00:20:53.942 "adrfam": "IPv4", 00:20:53.942 "traddr": "10.0.0.1", 00:20:53.942 "trsvcid": "38942" 00:20:53.942 }, 00:20:53.942 "auth": { 00:20:53.942 "state": "completed", 00:20:53.942 "digest": "sha512", 00:20:53.942 "dhgroup": "ffdhe2048" 00:20:53.942 } 00:20:53.942 } 00:20:53.942 ]' 00:20:53.942 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.942 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.942 03:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.942 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:53.942 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.942 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.942 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.942 03:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.200 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:54.200 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:54.767 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.026 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:55.026 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.026 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:55.026 03:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.026 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.285 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.285 { 00:20:55.285 "cntlid": 107, 00:20:55.285 "qid": 0, 00:20:55.285 "state": "enabled", 00:20:55.285 "thread": "nvmf_tgt_poll_group_000", 00:20:55.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.285 "listen_address": { 00:20:55.285 "trtype": "TCP", 00:20:55.285 "adrfam": "IPv4", 00:20:55.285 "traddr": "10.0.0.2", 00:20:55.285 "trsvcid": "4420" 00:20:55.285 }, 00:20:55.285 "peer_address": { 00:20:55.285 "trtype": "TCP", 00:20:55.285 "adrfam": "IPv4", 00:20:55.285 "traddr": "10.0.0.1", 00:20:55.285 "trsvcid": "38966" 00:20:55.285 }, 00:20:55.285 "auth": { 00:20:55.285 "state": "completed", 00:20:55.285 "digest": "sha512", 00:20:55.285 "dhgroup": "ffdhe2048" 00:20:55.285 } 00:20:55.285 } 00:20:55.285 ]' 00:20:55.285 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.544 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.803 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:55.803 03:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.371 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.630 00:20:56.630 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.630 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.630 03:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.889 { 00:20:56.889 "cntlid": 109, 00:20:56.889 "qid": 0, 00:20:56.889 "state": "enabled", 00:20:56.889 "thread": "nvmf_tgt_poll_group_000", 00:20:56.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.889 "listen_address": { 00:20:56.889 "trtype": "TCP", 00:20:56.889 "adrfam": "IPv4", 00:20:56.889 "traddr": "10.0.0.2", 00:20:56.889 "trsvcid": "4420" 00:20:56.889 }, 00:20:56.889 "peer_address": { 00:20:56.889 "trtype": "TCP", 00:20:56.889 "adrfam": "IPv4", 00:20:56.889 "traddr": "10.0.0.1", 00:20:56.889 "trsvcid": "38972" 00:20:56.889 }, 00:20:56.889 "auth": { 00:20:56.889 "state": "completed", 00:20:56.889 "digest": "sha512", 00:20:56.889 "dhgroup": "ffdhe2048" 00:20:56.889 } 00:20:56.889 } 00:20:56.889 ]' 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.889 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.148 03:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.148 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.148 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.148 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.148 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.148 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:57.148 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:20:57.728 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.017 03:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.017 03:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.017 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.018 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.018 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.018 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.341 00:20:58.341 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.341 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.341 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.601 { 00:20:58.601 "cntlid": 111, 00:20:58.601 "qid": 0, 00:20:58.601 "state": "enabled", 00:20:58.601 "thread": "nvmf_tgt_poll_group_000", 00:20:58.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.601 "listen_address": { 00:20:58.601 "trtype": "TCP", 00:20:58.601 "adrfam": "IPv4", 00:20:58.601 "traddr": "10.0.0.2", 00:20:58.601 "trsvcid": "4420" 00:20:58.601 }, 00:20:58.601 "peer_address": { 00:20:58.601 "trtype": "TCP", 00:20:58.601 "adrfam": "IPv4", 00:20:58.601 "traddr": "10.0.0.1", 00:20:58.601 "trsvcid": "39010" 00:20:58.601 }, 00:20:58.601 "auth": { 00:20:58.601 "state": "completed", 00:20:58.601 "digest": "sha512", 00:20:58.601 "dhgroup": "ffdhe2048" 00:20:58.601 } 00:20:58.601 } 00:20:58.601 ]' 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.601 
03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.601 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.602 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.860 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:58.860 03:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.427 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.686 03:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.944 00:20:59.944 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.944 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.944 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.203 { 00:21:00.203 "cntlid": 113, 00:21:00.203 "qid": 0, 00:21:00.203 "state": "enabled", 00:21:00.203 "thread": "nvmf_tgt_poll_group_000", 00:21:00.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.203 "listen_address": { 00:21:00.203 "trtype": "TCP", 00:21:00.203 "adrfam": "IPv4", 00:21:00.203 "traddr": "10.0.0.2", 00:21:00.203 "trsvcid": "4420" 00:21:00.203 }, 00:21:00.203 "peer_address": { 00:21:00.203 "trtype": "TCP", 00:21:00.203 "adrfam": "IPv4", 00:21:00.203 "traddr": "10.0.0.1", 00:21:00.203 "trsvcid": "60454" 00:21:00.203 }, 00:21:00.203 "auth": { 00:21:00.203 "state": "completed", 00:21:00.203 "digest": "sha512", 00:21:00.203 "dhgroup": "ffdhe3072" 00:21:00.203 } 00:21:00.203 } 00:21:00.203 ]' 00:21:00.203 03:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.203 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.204 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.204 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.204 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.204 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.204 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.204 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.463 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:00.463 03:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.029 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.288 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.548 00:21:01.548 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.548 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.548 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.806 { 00:21:01.806 "cntlid": 115, 00:21:01.806 "qid": 0, 00:21:01.806 "state": "enabled", 00:21:01.806 "thread": "nvmf_tgt_poll_group_000", 00:21:01.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.806 "listen_address": { 00:21:01.806 "trtype": "TCP", 00:21:01.806 "adrfam": "IPv4", 00:21:01.806 "traddr": "10.0.0.2", 00:21:01.806 "trsvcid": "4420" 00:21:01.806 }, 00:21:01.806 "peer_address": { 00:21:01.806 "trtype": "TCP", 00:21:01.806 "adrfam": "IPv4", 
00:21:01.806 "traddr": "10.0.0.1", 00:21:01.806 "trsvcid": "60468" 00:21:01.806 }, 00:21:01.806 "auth": { 00:21:01.806 "state": "completed", 00:21:01.806 "digest": "sha512", 00:21:01.806 "dhgroup": "ffdhe3072" 00:21:01.806 } 00:21:01.806 } 00:21:01.806 ]' 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.806 03:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.064 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:02.065 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.632 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.890 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.891 03:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.149 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.149 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.407 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.407 { 00:21:03.407 "cntlid": 117, 00:21:03.407 "qid": 0, 00:21:03.407 "state": "enabled", 00:21:03.407 "thread": "nvmf_tgt_poll_group_000", 00:21:03.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.407 "listen_address": { 00:21:03.407 "trtype": "TCP", 
00:21:03.407 "adrfam": "IPv4", 00:21:03.407 "traddr": "10.0.0.2", 00:21:03.407 "trsvcid": "4420" 00:21:03.407 }, 00:21:03.407 "peer_address": { 00:21:03.407 "trtype": "TCP", 00:21:03.407 "adrfam": "IPv4", 00:21:03.407 "traddr": "10.0.0.1", 00:21:03.407 "trsvcid": "60502" 00:21:03.407 }, 00:21:03.407 "auth": { 00:21:03.407 "state": "completed", 00:21:03.407 "digest": "sha512", 00:21:03.407 "dhgroup": "ffdhe3072" 00:21:03.407 } 00:21:03.407 } 00:21:03.407 ]' 00:21:03.407 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.407 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.407 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.408 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.408 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.408 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.408 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.408 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.666 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:03.666 03:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.244 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.502 00:21:04.502 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.502 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.502 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.760 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.760 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.760 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.761 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.761 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.761 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.761 { 00:21:04.761 "cntlid": 119, 00:21:04.761 "qid": 0, 00:21:04.761 "state": "enabled", 00:21:04.761 "thread": "nvmf_tgt_poll_group_000", 00:21:04.761 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.761 "listen_address": { 00:21:04.761 "trtype": "TCP", 00:21:04.761 "adrfam": "IPv4", 00:21:04.761 "traddr": "10.0.0.2", 00:21:04.761 "trsvcid": "4420" 00:21:04.761 }, 00:21:04.761 "peer_address": { 00:21:04.761 "trtype": "TCP", 00:21:04.761 "adrfam": "IPv4", 00:21:04.761 "traddr": "10.0.0.1", 00:21:04.761 "trsvcid": "60532" 00:21:04.761 }, 00:21:04.761 "auth": { 00:21:04.761 "state": "completed", 00:21:04.761 "digest": "sha512", 00:21:04.761 "dhgroup": "ffdhe3072" 00:21:04.761 } 00:21:04.761 } 00:21:04.761 ]' 00:21:04.761 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.761 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.761 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.019 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.019 03:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.019 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.019 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.019 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.019 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:05.019 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.586 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.844 03:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.844 03:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.102 00:21:06.102 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.102 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.102 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.361 03:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.361 { 00:21:06.361 "cntlid": 121, 00:21:06.361 "qid": 0, 00:21:06.361 "state": "enabled", 00:21:06.361 "thread": "nvmf_tgt_poll_group_000", 00:21:06.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.361 "listen_address": { 00:21:06.361 "trtype": "TCP", 00:21:06.361 "adrfam": "IPv4", 00:21:06.361 "traddr": "10.0.0.2", 00:21:06.361 "trsvcid": "4420" 00:21:06.361 }, 00:21:06.361 "peer_address": { 00:21:06.361 "trtype": "TCP", 00:21:06.361 "adrfam": "IPv4", 00:21:06.361 "traddr": "10.0.0.1", 00:21:06.361 "trsvcid": "60570" 00:21:06.361 }, 00:21:06.361 "auth": { 00:21:06.361 "state": "completed", 00:21:06.361 "digest": "sha512", 00:21:06.361 "dhgroup": "ffdhe4096" 00:21:06.361 } 00:21:06.361 } 00:21:06.361 ]' 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.361 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:06.619 03:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:07.186 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.444 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.444 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.444 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.444 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:07.444 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.445 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.703 00:21:07.703 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.703 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.703 03:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.962 { 00:21:07.962 "cntlid": 123, 00:21:07.962 "qid": 0, 00:21:07.962 "state": "enabled", 00:21:07.962 "thread": "nvmf_tgt_poll_group_000", 00:21:07.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.962 "listen_address": { 00:21:07.962 "trtype": "TCP", 00:21:07.962 "adrfam": "IPv4", 00:21:07.962 "traddr": "10.0.0.2", 00:21:07.962 "trsvcid": "4420" 00:21:07.962 }, 00:21:07.962 "peer_address": { 00:21:07.962 "trtype": "TCP", 00:21:07.962 "adrfam": "IPv4", 00:21:07.962 "traddr": "10.0.0.1", 00:21:07.962 "trsvcid": "60588" 00:21:07.962 }, 00:21:07.962 "auth": { 00:21:07.962 "state": "completed", 00:21:07.962 "digest": "sha512", 00:21:07.962 "dhgroup": "ffdhe4096" 00:21:07.962 } 00:21:07.962 } 00:21:07.962 ]' 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.962 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:08.220 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:08.787 03:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.045 03:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:09.045 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.046 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.304 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.562 03:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.562 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.562 { 00:21:09.562 "cntlid": 125, 00:21:09.562 "qid": 0, 00:21:09.562 "state": "enabled", 00:21:09.562 "thread": "nvmf_tgt_poll_group_000", 00:21:09.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.562 "listen_address": { 00:21:09.562 "trtype": "TCP", 00:21:09.563 "adrfam": "IPv4", 00:21:09.563 "traddr": "10.0.0.2", 00:21:09.563 "trsvcid": "4420" 00:21:09.563 }, 00:21:09.563 "peer_address": { 00:21:09.563 "trtype": "TCP", 00:21:09.563 "adrfam": "IPv4", 00:21:09.563 "traddr": "10.0.0.1", 00:21:09.563 "trsvcid": "35902" 00:21:09.563 }, 00:21:09.563 "auth": { 00:21:09.563 "state": "completed", 00:21:09.563 "digest": "sha512", 00:21:09.563 "dhgroup": "ffdhe4096" 00:21:09.563 } 00:21:09.563 } 00:21:09.563 ]' 00:21:09.563 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.563 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.563 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.821 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.821 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.821 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.821 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.821 03:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.821 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:09.821 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.389 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.648 03:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.907 00:21:10.907 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.907 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.907 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.166 03:32:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.166 { 00:21:11.166 "cntlid": 127, 00:21:11.166 "qid": 0, 00:21:11.166 "state": "enabled", 00:21:11.166 "thread": "nvmf_tgt_poll_group_000", 00:21:11.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.166 "listen_address": { 00:21:11.166 "trtype": "TCP", 00:21:11.166 "adrfam": "IPv4", 00:21:11.166 "traddr": "10.0.0.2", 00:21:11.166 "trsvcid": "4420" 00:21:11.166 }, 00:21:11.166 "peer_address": { 00:21:11.166 "trtype": "TCP", 00:21:11.166 "adrfam": "IPv4", 00:21:11.166 "traddr": "10.0.0.1", 00:21:11.166 "trsvcid": "35934" 00:21:11.166 }, 00:21:11.166 "auth": { 00:21:11.166 "state": "completed", 00:21:11.166 "digest": "sha512", 00:21:11.166 "dhgroup": "ffdhe4096" 00:21:11.166 } 00:21:11.166 } 00:21:11.166 ]' 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.166 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.425 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.425 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.425 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.425 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:11.425 03:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:11.993 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.252 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.820 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.820 
03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.820 { 00:21:12.820 "cntlid": 129, 00:21:12.820 "qid": 0, 00:21:12.820 "state": "enabled", 00:21:12.820 "thread": "nvmf_tgt_poll_group_000", 00:21:12.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.820 "listen_address": { 00:21:12.820 "trtype": "TCP", 00:21:12.820 "adrfam": "IPv4", 00:21:12.820 "traddr": "10.0.0.2", 00:21:12.820 "trsvcid": "4420" 00:21:12.820 }, 00:21:12.820 "peer_address": { 00:21:12.820 "trtype": "TCP", 00:21:12.820 "adrfam": "IPv4", 00:21:12.820 "traddr": "10.0.0.1", 00:21:12.820 "trsvcid": "35966" 00:21:12.820 }, 00:21:12.820 "auth": { 00:21:12.820 "state": "completed", 00:21:12.820 "digest": "sha512", 00:21:12.820 "dhgroup": "ffdhe6144" 00:21:12.820 } 00:21:12.820 } 00:21:12.820 ]' 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.820 03:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:13.079 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret 
DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.647 03:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.906 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.165 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.423 { 00:21:14.423 "cntlid": 131, 00:21:14.423 "qid": 0, 00:21:14.423 "state": "enabled", 00:21:14.423 "thread": "nvmf_tgt_poll_group_000", 00:21:14.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:14.423 "listen_address": { 00:21:14.423 "trtype": "TCP", 00:21:14.423 "adrfam": "IPv4", 00:21:14.423 "traddr": "10.0.0.2", 00:21:14.423 "trsvcid": "4420" 00:21:14.423 }, 00:21:14.423 "peer_address": { 00:21:14.423 "trtype": "TCP", 00:21:14.423 "adrfam": "IPv4", 00:21:14.423 "traddr": "10.0.0.1", 00:21:14.423 "trsvcid": "35998" 00:21:14.423 }, 00:21:14.423 "auth": { 00:21:14.423 "state": "completed", 00:21:14.423 "digest": "sha512", 00:21:14.423 "dhgroup": "ffdhe6144" 00:21:14.423 } 00:21:14.423 } 00:21:14.423 ]' 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.423 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.683 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.683 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.683 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.683 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.683 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.944 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:14.944 03:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.513 03:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.082 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.082 { 00:21:16.082 "cntlid": 133, 00:21:16.082 "qid": 0, 00:21:16.082 "state": "enabled", 00:21:16.082 "thread": "nvmf_tgt_poll_group_000", 00:21:16.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.082 "listen_address": { 00:21:16.082 "trtype": "TCP", 00:21:16.082 "adrfam": "IPv4", 00:21:16.082 "traddr": "10.0.0.2", 00:21:16.082 "trsvcid": "4420" 00:21:16.082 }, 00:21:16.082 "peer_address": { 00:21:16.082 "trtype": "TCP", 00:21:16.082 "adrfam": "IPv4", 00:21:16.082 "traddr": "10.0.0.1", 00:21:16.082 "trsvcid": "36024" 00:21:16.082 }, 00:21:16.082 "auth": { 00:21:16.082 "state": "completed", 00:21:16.082 "digest": "sha512", 00:21:16.082 "dhgroup": "ffdhe6144" 00:21:16.082 } 00:21:16.082 } 00:21:16.082 ]' 00:21:16.082 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.341 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.601 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret 
DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:16.601 03:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:17.169 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.738 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.738 { 00:21:17.738 "cntlid": 135, 00:21:17.738 "qid": 0, 00:21:17.738 "state": "enabled", 00:21:17.738 "thread": "nvmf_tgt_poll_group_000", 00:21:17.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.738 "listen_address": { 00:21:17.738 "trtype": "TCP", 00:21:17.738 "adrfam": "IPv4", 00:21:17.738 "traddr": "10.0.0.2", 00:21:17.738 "trsvcid": "4420" 00:21:17.738 }, 00:21:17.738 "peer_address": { 00:21:17.738 "trtype": "TCP", 00:21:17.738 "adrfam": "IPv4", 00:21:17.738 "traddr": "10.0.0.1", 00:21:17.738 "trsvcid": "36040" 00:21:17.738 }, 00:21:17.738 "auth": { 00:21:17.738 "state": "completed", 00:21:17.738 "digest": "sha512", 00:21:17.738 "dhgroup": "ffdhe6144" 00:21:17.738 } 00:21:17.738 } 00:21:17.738 ]' 00:21:17.738 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.997 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.997 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.997 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.997 03:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.997 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.997 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.997 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.256 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:18.256 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:18.826 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.826 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:18.826 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.827 03:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.395 00:21:19.395 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.395 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.395 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.654 { 00:21:19.654 "cntlid": 137, 00:21:19.654 "qid": 0, 00:21:19.654 "state": "enabled", 00:21:19.654 "thread": "nvmf_tgt_poll_group_000", 00:21:19.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.654 "listen_address": { 00:21:19.654 "trtype": "TCP", 00:21:19.654 "adrfam": "IPv4", 00:21:19.654 "traddr": "10.0.0.2", 00:21:19.654 "trsvcid": "4420" 00:21:19.654 }, 00:21:19.654 "peer_address": { 00:21:19.654 "trtype": "TCP", 00:21:19.654 "adrfam": "IPv4", 00:21:19.654 "traddr": "10.0.0.1", 00:21:19.654 "trsvcid": "36066" 00:21:19.654 }, 00:21:19.654 "auth": { 00:21:19.654 "state": "completed", 00:21:19.654 "digest": "sha512", 00:21:19.654 "dhgroup": "ffdhe8192" 00:21:19.654 } 00:21:19.654 } 00:21:19.654 ]' 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.654 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.913 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:19.913 03:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.481 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.740 03:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.740 03:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.308 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.308 { 00:21:21.308 "cntlid": 139, 00:21:21.308 "qid": 0, 00:21:21.308 "state": "enabled", 00:21:21.308 "thread": "nvmf_tgt_poll_group_000", 00:21:21.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.308 "listen_address": { 00:21:21.308 "trtype": "TCP", 00:21:21.308 "adrfam": "IPv4", 00:21:21.308 "traddr": "10.0.0.2", 00:21:21.308 "trsvcid": "4420" 00:21:21.308 }, 00:21:21.308 "peer_address": { 00:21:21.308 "trtype": "TCP", 00:21:21.308 "adrfam": "IPv4", 00:21:21.308 "traddr": "10.0.0.1", 00:21:21.308 "trsvcid": "56318" 00:21:21.308 }, 00:21:21.308 "auth": { 00:21:21.308 "state": "completed", 00:21:21.308 "digest": "sha512", 00:21:21.308 "dhgroup": "ffdhe8192" 00:21:21.308 } 00:21:21.308 } 00:21:21.308 ]' 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.308 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.567 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.567 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.567 03:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.567 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.567 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.567 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:21.567 03:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: --dhchap-ctrl-secret DHHC-1:02:ZmJlMWNmMjAzOTRmYTMwNDY1ZjA3ZTNmYjBhOTNlNGQzMDQyMGJkNzZmYjU2MDMwspixgw==: 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.135 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.394 03:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.394 03:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.962 00:21:22.962 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.962 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.962 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.221 { 00:21:23.221 "cntlid": 141, 00:21:23.221 "qid": 0, 00:21:23.221 "state": "enabled", 00:21:23.221 "thread": "nvmf_tgt_poll_group_000", 00:21:23.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.221 "listen_address": { 00:21:23.221 "trtype": "TCP", 00:21:23.221 "adrfam": "IPv4", 00:21:23.221 "traddr": "10.0.0.2", 00:21:23.221 "trsvcid": "4420" 00:21:23.221 }, 00:21:23.221 "peer_address": { 00:21:23.221 "trtype": "TCP", 00:21:23.221 "adrfam": "IPv4", 00:21:23.221 "traddr": "10.0.0.1", 00:21:23.221 "trsvcid": "56354" 00:21:23.221 }, 00:21:23.221 "auth": { 00:21:23.221 "state": "completed", 00:21:23.221 "digest": "sha512", 00:21:23.221 "dhgroup": "ffdhe8192" 00:21:23.221 } 00:21:23.221 } 00:21:23.221 ]' 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.221 03:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.221 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.480 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:23.480 03:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:01:M2ZmZDUzN2JjMjEwNDQ1ZTg1ZDI5ZTFhNTU5ODczZjEuFw/4: 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:24.050 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.310 03:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.310 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.878 00:21:24.878 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.878 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.878 03:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.878 { 00:21:24.878 "cntlid": 143, 00:21:24.878 "qid": 0, 00:21:24.878 "state": "enabled", 00:21:24.878 "thread": "nvmf_tgt_poll_group_000", 00:21:24.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.878 "listen_address": { 00:21:24.878 "trtype": "TCP", 00:21:24.878 "adrfam": "IPv4", 00:21:24.878 "traddr": "10.0.0.2", 00:21:24.878 "trsvcid": "4420" 00:21:24.878 }, 00:21:24.878 "peer_address": { 00:21:24.878 "trtype": "TCP", 00:21:24.878 "adrfam": "IPv4", 00:21:24.878 "traddr": "10.0.0.1", 00:21:24.878 "trsvcid": "56372" 00:21:24.878 }, 00:21:24.878 "auth": { 00:21:24.878 "state": "completed", 00:21:24.878 "digest": "sha512", 00:21:24.878 "dhgroup": "ffdhe8192" 00:21:24.878 } 00:21:24.878 } 00:21:24.878 ]' 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.878 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.878 
03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.138 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.138 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.138 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.138 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.138 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.397 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:25.397 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.965 03:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.965 03:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.965 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.531 00:21:26.531 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.531 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.531 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.789 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.789 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.789 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.789 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.789 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.789 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.789 { 00:21:26.789 "cntlid": 145, 00:21:26.789 "qid": 0, 00:21:26.789 "state": "enabled", 00:21:26.789 "thread": "nvmf_tgt_poll_group_000", 00:21:26.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.789 "listen_address": { 00:21:26.789 "trtype": "TCP", 00:21:26.789 "adrfam": "IPv4", 00:21:26.789 "traddr": "10.0.0.2", 00:21:26.789 "trsvcid": "4420" 00:21:26.789 }, 00:21:26.789 "peer_address": { 00:21:26.789 
"trtype": "TCP", 00:21:26.789 "adrfam": "IPv4", 00:21:26.789 "traddr": "10.0.0.1", 00:21:26.789 "trsvcid": "56404" 00:21:26.789 }, 00:21:26.790 "auth": { 00:21:26.790 "state": "completed", 00:21:26.790 "digest": "sha512", 00:21:26.790 "dhgroup": "ffdhe8192" 00:21:26.790 } 00:21:26.790 } 00:21:26.790 ]' 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.790 03:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.048 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:27.048 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NTM5ZGE0MWJlM2RlYWQxNmJiN2RhODAxOWVkNWFlZjFiMjAyNmE3ZTU0MjU4NTYwYpwPZg==: --dhchap-ctrl-secret DHHC-1:03:NTIwZjQzM2E3MWUyZmU0NDRiZWFlMTZjYmZjZGUzMDk4NWRiMDNhZDUzNTg4NGQ4Y2EzYjM5OWE1ZmU3MWZiNAQ83M0=: 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:27.654 03:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:28.300 request: 00:21:28.300 { 00:21:28.300 "name": "nvme0", 00:21:28.300 "trtype": "tcp", 00:21:28.300 "traddr": "10.0.0.2", 00:21:28.300 "adrfam": "ipv4", 00:21:28.300 "trsvcid": "4420", 00:21:28.300 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.300 "prchk_reftag": false, 00:21:28.300 "prchk_guard": false, 00:21:28.300 "hdgst": false, 00:21:28.300 "ddgst": false, 00:21:28.300 "dhchap_key": "key2", 00:21:28.300 "allow_unrecognized_csi": false, 00:21:28.300 "method": "bdev_nvme_attach_controller", 00:21:28.300 "req_id": 1 00:21:28.300 } 00:21:28.300 Got JSON-RPC error response 00:21:28.300 response: 00:21:28.300 { 00:21:28.300 "code": -5, 00:21:28.300 "message": "Input/output error" 00:21:28.300 } 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.300 03:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.300 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.559 request: 00:21:28.559 { 00:21:28.559 "name": "nvme0", 00:21:28.559 "trtype": "tcp", 00:21:28.559 "traddr": "10.0.0.2", 00:21:28.559 "adrfam": "ipv4", 00:21:28.559 "trsvcid": "4420", 00:21:28.559 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.559 "prchk_reftag": false, 00:21:28.559 "prchk_guard": false, 00:21:28.559 "hdgst": false, 00:21:28.559 "ddgst": false, 00:21:28.559 "dhchap_key": "key1", 00:21:28.559 "dhchap_ctrlr_key": "ckey2", 00:21:28.559 "allow_unrecognized_csi": false, 00:21:28.559 "method": "bdev_nvme_attach_controller", 00:21:28.559 "req_id": 1 00:21:28.559 } 00:21:28.559 Got JSON-RPC error response 00:21:28.559 response: 00:21:28.559 { 00:21:28.559 "code": -5, 00:21:28.559 "message": "Input/output error" 00:21:28.559 } 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:28.559 03:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.559 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.560 03:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.128 request: 00:21:29.128 { 00:21:29.128 "name": "nvme0", 00:21:29.128 "trtype": "tcp", 00:21:29.128 "traddr": "10.0.0.2", 00:21:29.128 "adrfam": "ipv4", 00:21:29.128 "trsvcid": "4420", 00:21:29.128 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.128 "prchk_reftag": false, 00:21:29.128 "prchk_guard": false, 00:21:29.128 "hdgst": false, 00:21:29.128 "ddgst": false, 00:21:29.128 "dhchap_key": "key1", 00:21:29.128 "dhchap_ctrlr_key": "ckey1", 00:21:29.128 "allow_unrecognized_csi": false, 00:21:29.128 "method": "bdev_nvme_attach_controller", 00:21:29.128 "req_id": 1 00:21:29.128 } 00:21:29.128 Got JSON-RPC error response 00:21:29.128 response: 00:21:29.128 { 00:21:29.128 "code": -5, 00:21:29.128 "message": "Input/output error" 00:21:29.128 } 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2671633 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2671633 ']' 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2671633 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671633 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671633' 00:21:29.128 killing process with pid 2671633 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2671633 00:21:29.128 03:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2671633 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2693012 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2693012 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2693012 ']' 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.509 03:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.078 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.078 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:31.078 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.078 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.078 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2693012 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2693012 ']' 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
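For readers following the trace, the key setup that the next stretch of the log performs can be reproduced by hand roughly as below. This is only a sketch built from RPCs that appear in this log; the key file paths, key names, and host NQN are the ones from this run, and use of the target's default /var/tmp/spdk.sock RPC socket is an assumption.

#!/usr/bin/env bash
# Sketch of the target-side DH-HMAC-CHAP key registration traced below (assumptions noted above).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Register the generated secrets with the target's keyring.
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.yOt
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdr
$RPC keyring_file_add_key key1  /tmp/spdk.key-sha256.BAG
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gbr

# Allow the host to authenticate against the subsystem with one of the keys
# (add --dhchap-ctrlr-key for bidirectional authentication, as the test does per key pair).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1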
00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.337 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 null0 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yOt 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.zdr ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zdr 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BAG 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.gbr ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gbr 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.906 03:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lhP 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.0Wk ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Wk 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.I40 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.906 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
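The verification pattern repeated throughout this log (connect_authenticate) reduces to: attach a controller from the host-side RPC server with the key under test, confirm on the target that the resulting qpair completed DH-HMAC-CHAP with the expected digest and DH group, then detach. A condensed sketch follows, using only commands visible in this trace; the host RPC socket /var/tmp/host.sock, addresses, and key3 values are the ones from this run, and key3 is assumed to already be registered on both sides as done earlier.

#!/usr/bin/env bash
# Condensed sketch of the connect_authenticate check traced below.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: attach a controller, authenticating with key3.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

# Target side: verify the qpair negotiated the expected authentication parameters.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e \
    '.[0].auth | .state == "completed" and .digest == "sha512" and .dhgroup == "ffdhe8192"'

# Host side: tear the controller down again before the next key/digest combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0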
00:21:31.907 03:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.475 nvme0n1 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.734 { 00:21:32.734 "cntlid": 1, 00:21:32.734 "qid": 0, 00:21:32.734 "state": "enabled", 00:21:32.734 "thread": "nvmf_tgt_poll_group_000", 00:21:32.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.734 "listen_address": { 00:21:32.734 "trtype": "TCP", 00:21:32.734 "adrfam": "IPv4", 00:21:32.734 "traddr": "10.0.0.2", 00:21:32.734 "trsvcid": "4420" 00:21:32.734 }, 00:21:32.734 "peer_address": { 00:21:32.734 "trtype": "TCP", 00:21:32.734 "adrfam": "IPv4", 00:21:32.734 "traddr": "10.0.0.1", 00:21:32.734 "trsvcid": "57986" 00:21:32.734 }, 00:21:32.734 "auth": { 00:21:32.734 "state": "completed", 00:21:32.734 "digest": "sha512", 00:21:32.734 "dhgroup": "ffdhe8192" 00:21:32.734 } 00:21:32.734 } 00:21:32.734 ]' 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.734 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.993 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.993 03:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.993 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.993 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.993 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.254 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:33.254 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:33.824 03:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.824 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.085 request: 00:21:34.085 { 00:21:34.085 "name": "nvme0", 00:21:34.085 "trtype": "tcp", 00:21:34.085 "traddr": "10.0.0.2", 00:21:34.085 "adrfam": "ipv4", 00:21:34.085 "trsvcid": "4420", 00:21:34.085 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.085 "prchk_reftag": false, 00:21:34.085 "prchk_guard": false, 00:21:34.085 "hdgst": false, 00:21:34.085 "ddgst": false, 00:21:34.085 "dhchap_key": "key3", 00:21:34.085 "allow_unrecognized_csi": false, 00:21:34.085 "method": "bdev_nvme_attach_controller", 00:21:34.085 "req_id": 1 00:21:34.085 } 00:21:34.085 Got JSON-RPC error response 00:21:34.085 response: 00:21:34.085 { 00:21:34.085 "code": -5, 00:21:34.085 "message": "Input/output error" 00:21:34.085 } 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.085 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.349 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.609 request: 00:21:34.609 { 00:21:34.609 "name": "nvme0", 00:21:34.609 "trtype": "tcp", 00:21:34.609 "traddr": "10.0.0.2", 00:21:34.609 "adrfam": "ipv4", 00:21:34.609 "trsvcid": "4420", 00:21:34.609 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.609 "prchk_reftag": false, 00:21:34.609 "prchk_guard": false, 00:21:34.609 "hdgst": false, 00:21:34.609 "ddgst": false, 00:21:34.609 "dhchap_key": "key3", 00:21:34.609 "allow_unrecognized_csi": false, 00:21:34.609 "method": "bdev_nvme_attach_controller", 00:21:34.609 "req_id": 1 00:21:34.609 } 00:21:34.609 Got JSON-RPC error response 00:21:34.609 response: 00:21:34.609 { 00:21:34.609 "code": -5, 00:21:34.609 "message": "Input/output error" 00:21:34.609 } 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.609 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:34.869 03:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.128 request: 00:21:35.128 { 00:21:35.128 "name": "nvme0", 00:21:35.128 "trtype": "tcp", 00:21:35.128 "traddr": "10.0.0.2", 00:21:35.128 "adrfam": "ipv4", 00:21:35.128 "trsvcid": "4420", 00:21:35.128 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.128 "prchk_reftag": false, 00:21:35.128 "prchk_guard": false, 00:21:35.128 "hdgst": false, 00:21:35.128 "ddgst": false, 00:21:35.128 "dhchap_key": "key0", 00:21:35.128 "dhchap_ctrlr_key": "key1", 00:21:35.128 "allow_unrecognized_csi": false, 00:21:35.128 "method": "bdev_nvme_attach_controller", 00:21:35.128 "req_id": 1 00:21:35.128 } 00:21:35.128 Got JSON-RPC error response 00:21:35.128 response: 00:21:35.128 { 00:21:35.128 "code": -5, 00:21:35.128 "message": "Input/output error" 00:21:35.128 } 00:21:35.128 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.128 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.128 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.128 03:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.128 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:35.128 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:35.128 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:35.387 nvme0n1 00:21:35.387 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:35.387 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:35.387 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:35.647 03:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:36.584 nvme0n1 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:36.584 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.843 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.843 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:36.843 03:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: --dhchap-ctrl-secret DHHC-1:03:M2ZhNmFkOWQ5MzFjNDNmN2FlMzAzODhlMWNkMmM0MGE2ZmY3MGZjZWI3ZjQzOTlmMDkxOGQwYzUyMWU3NDlhNLIiQkg=: 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.411 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:37.670 03:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:38.238 request: 00:21:38.238 { 00:21:38.238 "name": "nvme0", 00:21:38.238 "trtype": "tcp", 00:21:38.238 "traddr": "10.0.0.2", 00:21:38.238 "adrfam": "ipv4", 00:21:38.238 "trsvcid": "4420", 00:21:38.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:38.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:38.239 "prchk_reftag": false, 00:21:38.239 "prchk_guard": false, 00:21:38.239 "hdgst": false, 00:21:38.239 "ddgst": false, 00:21:38.239 "dhchap_key": "key1", 00:21:38.239 "allow_unrecognized_csi": false, 00:21:38.239 "method": "bdev_nvme_attach_controller", 00:21:38.239 "req_id": 1 00:21:38.239 } 00:21:38.239 Got JSON-RPC error response 00:21:38.239 response: 00:21:38.239 { 00:21:38.239 "code": -5, 00:21:38.239 "message": "Input/output error" 00:21:38.239 } 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.239 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.808 nvme0n1 00:21:38.808 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:38.808 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:38.808 03:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.068 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.068 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.068 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:39.327 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:39.586 nvme0n1 00:21:39.586 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:39.586 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:39.586 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.586 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.586 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.586 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: '' 2s 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: ]] 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjhkMjcyNmFhOWU5ZTIzNTNkNGMyODZhYzExNTAyMTDegHDn: 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:39.845 03:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: 2s 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: ]] 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjVhM2FkYmMzY2Y0Mzg3ZTNiYzViOWMxOTEwZGE4Nzg2MTI1N2QxYzY2ZmJlNTVmNuw++Q==: 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:42.380 03:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:44.286 03:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:44.286 03:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:44.286 03:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:44.286 03:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:44.286 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:44.854 nvme0n1 00:21:44.854 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.854 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.855 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.855 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.855 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:44.855 03:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.113 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:45.113 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:45.113 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:45.373 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:45.632 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:45.632 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:45.632 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:45.891 03:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:46.153 request: 00:21:46.153 { 00:21:46.153 "name": "nvme0", 00:21:46.154 "dhchap_key": "key1", 00:21:46.154 "dhchap_ctrlr_key": "key3", 00:21:46.154 "method": "bdev_nvme_set_keys", 00:21:46.154 "req_id": 1 00:21:46.154 } 00:21:46.154 Got JSON-RPC error response 00:21:46.154 response: 00:21:46.154 { 00:21:46.154 "code": -13, 00:21:46.154 "message": "Permission denied" 00:21:46.154 } 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:46.154 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.414 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:46.414 03:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:47.351 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:47.351 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.351 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:47.610 03:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:48.547 nvme0n1 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:48.547 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:48.806 request: 00:21:48.806 { 00:21:48.806 "name": "nvme0", 00:21:48.806 "dhchap_key": "key2", 00:21:48.806 "dhchap_ctrlr_key": "key0", 00:21:48.806 "method": "bdev_nvme_set_keys", 00:21:48.806 "req_id": 1 00:21:48.806 } 00:21:48.806 Got JSON-RPC error response 00:21:48.806 response: 00:21:48.806 { 00:21:48.806 "code": -13, 00:21:48.806 "message": "Permission denied" 00:21:48.806 } 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.806 03:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:49.064 03:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:49.064 03:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:49.997 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:49.997 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:49.997 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2671867 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2671867 ']' 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2671867 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:50.255 
03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2671867 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2671867' 00:21:50.255 killing process with pid 2671867 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2671867 00:21:50.255 03:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2671867 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.784 rmmod nvme_tcp 00:21:52.784 rmmod nvme_fabrics 00:21:52.784 rmmod nvme_keyring 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2693012 ']' 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2693012 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2693012 ']' 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2693012 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2693012 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2693012' 00:21:52.784 killing process with pid 2693012 00:21:52.784 03:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2693012 00:21:52.784 03:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2693012 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.719 03:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yOt /tmp/spdk.key-sha256.BAG /tmp/spdk.key-sha384.lhP /tmp/spdk.key-sha512.I40 /tmp/spdk.key-sha512.zdr /tmp/spdk.key-sha384.gbr /tmp/spdk.key-sha256.0Wk '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:56.250 00:21:56.250 real 2m36.196s 00:21:56.250 user 5m57.147s 00:21:56.250 sys 0m23.368s 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.250 ************************************ 00:21:56.250 END TEST nvmf_auth_target 00:21:56.250 ************************************ 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.250 03:32:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:56.250 ************************************ 00:21:56.250 START TEST nvmf_bdevio_no_huge 00:21:56.250 ************************************ 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:56.250 * Looking for test storage... 
00:21:56.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.250 --rc genhtml_branch_coverage=1 00:21:56.250 --rc genhtml_function_coverage=1 00:21:56.250 --rc genhtml_legend=1 00:21:56.250 --rc geninfo_all_blocks=1 00:21:56.250 --rc geninfo_unexecuted_blocks=1 00:21:56.250 00:21:56.250 ' 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.250 --rc genhtml_branch_coverage=1 00:21:56.250 --rc genhtml_function_coverage=1 00:21:56.250 --rc genhtml_legend=1 00:21:56.250 --rc geninfo_all_blocks=1 00:21:56.250 --rc geninfo_unexecuted_blocks=1 00:21:56.250 00:21:56.250 ' 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.250 --rc genhtml_branch_coverage=1 00:21:56.250 --rc genhtml_function_coverage=1 00:21:56.250 --rc genhtml_legend=1 00:21:56.250 --rc geninfo_all_blocks=1 00:21:56.250 --rc geninfo_unexecuted_blocks=1 00:21:56.250 00:21:56.250 ' 00:21:56.250 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.250 --rc genhtml_branch_coverage=1 00:21:56.250 --rc genhtml_function_coverage=1 00:21:56.250 --rc genhtml_legend=1 00:21:56.250 --rc geninfo_all_blocks=1 00:21:56.250 --rc geninfo_unexecuted_blocks=1 00:21:56.250 00:21:56.250 ' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:56.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:56.251 03:32:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.518 
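The gather_supported_nvmf_pci_devs step starting here fills the e810/x722/mlx arrays declared above by matching each NIC's "vendor:device" ID, then echoes a "Found ..." line per detected port (as the next lines show). The real logic lives in test/nvmf/common.sh and uses a pre-built pci_bus_cache; the following is only a minimal stand-alone sketch of the same classification idea, using lspci directly, which is an assumption and not the script's actual code:

    #!/usr/bin/env bash
    # Rough sketch: sort NICs into arrays by "vendor:device" ID, like the test does.
    declare -a e810 x722 mlx
    while read -r addr id; do
      case "$id" in
        8086:1592|8086:159b) e810+=("0000:$addr") ;;   # Intel E810 family
        8086:37d2)           x722+=("0000:$addr") ;;   # Intel X722
        15b3:*)              mlx+=("0000:$addr")  ;;   # Mellanox ConnectX family
      esac
    done < <(lspci -n | awk '$2 ~ /^02/ {print $1, $3}')   # class 02xx = network controller
    echo "e810 ports: ${e810[*]:-none}"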
03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:01.518 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:01.518 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:01.518 Found net devices under 0000:af:00.0: cvl_0_0 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:01.518 Found net devices under 0000:af:00.1: cvl_0_1 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.518 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:22:01.519 00:22:01.519 --- 10.0.0.2 ping statistics --- 00:22:01.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.519 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:22:01.519 00:22:01.519 --- 10.0.0.1 ping statistics --- 00:22:01.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.519 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.519 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2700517 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2700517 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2700517 ']' 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.777 03:33:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:01.777 [2024-12-13 03:33:02.827150] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:01.777 [2024-12-13 03:33:02.827299] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:01.777 [2024-12-13 03:33:02.963282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.035 [2024-12-13 03:33:03.081433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.035 [2024-12-13 03:33:03.081477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.035 [2024-12-13 03:33:03.081488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.035 [2024-12-13 03:33:03.081498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.035 [2024-12-13 03:33:03.081506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
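nvmfappstart launched the target with core mask 0x78, and the bdevio app further down uses 0x7, so the two processes run on disjoint cores: 0x78 is binary 0111 1000, i.e. cores 3 to 6 (matching the four "Reactor started on core N" notices just below), while 0x7 covers cores 0 to 2. A quick, illustrative way to decode such a mask in the same shell the tests use (helper name is made up here):

    # Decode an SPDK/DPDK core mask into the core numbers it selects.
    mask_to_cores() {
      local mask=$(( $1 )) i
      local -a cores=()
      for (( i = 0; i < 64; i++ )); do
        (( (mask >> i) & 1 )) && cores+=("$i")
      done
      echo "${cores[*]}"
    }
    mask_to_cores 0x78   # -> 3 4 5 6  (nvmf_tgt reactors)
    mask_to_cores 0x7    # -> 0 1 2    (bdevio reactors)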
00:22:02.035 [2024-12-13 03:33:03.083537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:02.035 [2024-12-13 03:33:03.083629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:02.035 [2024-12-13 03:33:03.083692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.035 [2024-12-13 03:33:03.083715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.602 [2024-12-13 03:33:03.686044] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.602 Malloc0 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.602 [2024-12-13 03:33:03.785682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.602 { 00:22:02.602 "params": { 00:22:02.602 "name": "Nvme$subsystem", 00:22:02.602 "trtype": "$TEST_TRANSPORT", 00:22:02.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.602 "adrfam": "ipv4", 00:22:02.602 "trsvcid": "$NVMF_PORT", 00:22:02.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.602 "hdgst": ${hdgst:-false}, 00:22:02.602 "ddgst": ${ddgst:-false} 00:22:02.602 }, 00:22:02.602 "method": "bdev_nvme_attach_controller" 00:22:02.602 } 00:22:02.602 EOF 00:22:02.602 )") 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:02.602 03:33:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.602 "params": { 00:22:02.602 "name": "Nvme1", 00:22:02.602 "trtype": "tcp", 00:22:02.602 "traddr": "10.0.0.2", 00:22:02.602 "adrfam": "ipv4", 00:22:02.602 "trsvcid": "4420", 00:22:02.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.602 "hdgst": false, 00:22:02.602 "ddgst": false 00:22:02.602 }, 00:22:02.602 "method": "bdev_nvme_attach_controller" 00:22:02.602 }' 00:22:02.860 [2024-12-13 03:33:03.862415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
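The JSON printed above by gen_nvmf_target_json is what bdevio consumes via --json /dev/fd/62: it attaches one controller (Nvme1) over NVMe/TCP to 10.0.0.2:4420. The subsystem it connects to was built by the rpc_cmd calls just above; roughly the same target-side setup could be issued by hand with scripts/rpc.py against the running nvmf_tgt (a sketch only, assuming the default /var/tmp/spdk.sock RPC socket):

    # Sketch of the target-side RPC sequence shown in the log above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420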
00:22:02.860 [2024-12-13 03:33:03.862502] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2700662 ] 00:22:02.860 [2024-12-13 03:33:03.991078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.119 [2024-12-13 03:33:04.109799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.119 [2024-12-13 03:33:04.109872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.119 [2024-12-13 03:33:04.109878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.686 I/O targets: 00:22:03.686 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:03.686 00:22:03.686 00:22:03.686 CUnit - A unit testing framework for C - Version 2.1-3 00:22:03.686 http://cunit.sourceforge.net/ 00:22:03.686 00:22:03.686 00:22:03.686 Suite: bdevio tests on: Nvme1n1 00:22:03.686 Test: blockdev write read block ...passed 00:22:03.686 Test: blockdev write zeroes read block ...passed 00:22:03.686 Test: blockdev write zeroes read no split ...passed 00:22:03.686 Test: blockdev write zeroes read split ...passed 00:22:03.945 Test: blockdev write zeroes read split partial ...passed 00:22:03.945 Test: blockdev reset ...[2024-12-13 03:33:04.938953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:03.945 [2024-12-13 03:33:04.939055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000323a00 (9): Bad file descriptor 00:22:03.945 [2024-12-13 03:33:04.956124] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:03.945 passed 00:22:03.945 Test: blockdev write read 8 blocks ...passed 00:22:03.945 Test: blockdev write read size > 128k ...passed 00:22:03.945 Test: blockdev write read invalid size ...passed 00:22:03.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:03.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:03.945 Test: blockdev write read max offset ...passed 00:22:03.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:03.945 Test: blockdev writev readv 8 blocks ...passed 00:22:03.945 Test: blockdev writev readv 30 x 1block ...passed 00:22:04.203 Test: blockdev writev readv block ...passed 00:22:04.203 Test: blockdev writev readv size > 128k ...passed 00:22:04.203 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:04.203 Test: blockdev comparev and writev ...[2024-12-13 03:33:05.168391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.168437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.168457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.168468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.168766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.168781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.168797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.168807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.169109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.169125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.169141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.169151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.169439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.169455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.169471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.203 [2024-12-13 03:33:05.169481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:04.203 passed 00:22:04.203 Test: blockdev nvme passthru rw ...passed 00:22:04.203 Test: blockdev nvme passthru vendor specific ...[2024-12-13 03:33:05.251282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.203 [2024-12-13 03:33:05.251313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.251453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.203 [2024-12-13 03:33:05.251476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.251615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.203 [2024-12-13 03:33:05.251629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:04.203 [2024-12-13 03:33:05.251755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.203 [2024-12-13 03:33:05.251769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:04.203 passed 00:22:04.203 Test: blockdev nvme admin passthru ...passed 00:22:04.203 Test: blockdev copy ...passed 00:22:04.203 00:22:04.203 Run Summary: Type Total Ran Passed Failed Inactive 00:22:04.203 suites 1 1 n/a 0 0 00:22:04.203 tests 23 23 23 0 0 00:22:04.203 asserts 152 152 152 0 n/a 00:22:04.203 00:22:04.203 Elapsed time = 1.182 seconds 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.770 03:33:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.770 rmmod nvme_tcp 00:22:05.028 rmmod nvme_fabrics 00:22:05.028 rmmod nvme_keyring 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2700517 ']' 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2700517 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2700517 ']' 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2700517 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2700517 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2700517' 00:22:05.028 killing process with pid 2700517 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2700517 00:22:05.028 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2700517 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.962 03:33:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:07.863 00:22:07.863 real 0m11.914s 00:22:07.863 user 0m20.721s 00:22:07.863 sys 0m5.254s 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.863 ************************************ 00:22:07.863 END TEST nvmf_bdevio_no_huge 00:22:07.863 ************************************ 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:07.863 ************************************ 00:22:07.863 START TEST nvmf_tls 00:22:07.863 ************************************ 00:22:07.863 03:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:08.121 * Looking for test storage... 00:22:08.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:08.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.121 --rc genhtml_branch_coverage=1 00:22:08.121 --rc genhtml_function_coverage=1 00:22:08.121 --rc genhtml_legend=1 00:22:08.121 --rc geninfo_all_blocks=1 00:22:08.121 --rc geninfo_unexecuted_blocks=1 00:22:08.121 00:22:08.121 ' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:08.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.121 --rc genhtml_branch_coverage=1 00:22:08.121 --rc genhtml_function_coverage=1 00:22:08.121 --rc genhtml_legend=1 00:22:08.121 --rc geninfo_all_blocks=1 00:22:08.121 --rc geninfo_unexecuted_blocks=1 00:22:08.121 00:22:08.121 ' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:08.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.121 --rc genhtml_branch_coverage=1 00:22:08.121 --rc genhtml_function_coverage=1 00:22:08.121 --rc genhtml_legend=1 00:22:08.121 --rc geninfo_all_blocks=1 00:22:08.121 --rc geninfo_unexecuted_blocks=1 00:22:08.121 00:22:08.121 ' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:08.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.121 --rc genhtml_branch_coverage=1 00:22:08.121 --rc genhtml_function_coverage=1 00:22:08.121 --rc genhtml_legend=1 00:22:08.121 --rc geninfo_all_blocks=1 00:22:08.121 --rc geninfo_unexecuted_blocks=1 00:22:08.121 00:22:08.121 ' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
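The "lt 1.15 2" check a few lines above is scripts/common.sh deciding whether the installed lcov predates 2.x, so tls.sh exports the older --rc lcov_branch_coverage style options; cmp_versions splits each version string on ". - :" and compares field by field. A compressed, stand-alone sketch of that comparison logic (illustrative only, not the script's cmp_versions):

    # Return success (0) if version $1 is strictly older than version $2.
    version_lt() {
      local IFS=.-:                     # split on the same separators the script uses
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
      done
      return 1                          # equal means not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the old --rc lcov_branch_coverage flags"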
00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.121 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.122 03:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:13.385 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.385 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:13.385 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:13.386 Found net devices under 0000:af:00.0: cvl_0_0 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:13.386 Found net devices under 0000:af:00.1: cvl_0_1 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.386 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:22:13.645 00:22:13.645 --- 10.0.0.2 ping statistics --- 00:22:13.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.645 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:22:13.645 00:22:13.645 --- 10.0.0.1 ping statistics --- 00:22:13.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.645 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2704863 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2704863 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2704863 ']' 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.645 03:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.645 [2024-12-13 03:33:14.797408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
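The namespace plumbing traced above comes down to a handful of iproute2 and iptables commands. A condensed sketch follows (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and TCP port 4420 are the ones used in this run; the test additionally tags its iptables rule with an SPDK_NVMF comment): the target port is moved into its own network namespace while the initiator port stays in the default namespace, and a ping in each direction verifies the loopback topology before the target is started.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target interface into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator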
00:22:13.645 [2024-12-13 03:33:14.797499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.904 [2024-12-13 03:33:14.915251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.904 [2024-12-13 03:33:15.017859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.904 [2024-12-13 03:33:15.017904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.904 [2024-12-13 03:33:15.017915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.904 [2024-12-13 03:33:15.017931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.904 [2024-12-13 03:33:15.017939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.904 [2024-12-13 03:33:15.019357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:14.468 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:14.726 true 00:22:14.726 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:14.726 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:14.984 03:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:14.984 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:14.984 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:15.242 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:15.242 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:15.242 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:15.242 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:15.242 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:15.500 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:15.500 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:15.758 03:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:16.016 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.016 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:16.274 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:16.274 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:16.274 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:16.532 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.0HwdhAqQGd 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.yj9e9xSAzV 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0HwdhAqQGd 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.yj9e9xSAzV 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:16.790 03:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:17.356 03:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.0HwdhAqQGd 00:22:17.356 03:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.0HwdhAqQGd 00:22:17.356 03:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.614 [2024-12-13 03:33:18.640330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.614 03:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:17.872 03:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:17.872 [2024-12-13 03:33:19.001275] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.872 [2024-12-13 03:33:19.001557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.872 03:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:18.130 malloc0 00:22:18.130 03:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.387 03:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.0HwdhAqQGd 00:22:18.645 03:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:18.645 03:33:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.0HwdhAqQGd 00:22:28.724 Initializing NVMe Controllers 00:22:28.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.724 Initialization complete. Launching workers. 00:22:28.724 ======================================================== 00:22:28.724 Latency(us) 00:22:28.724 Device Information : IOPS MiB/s Average min max 00:22:28.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12960.39 50.63 4938.47 1162.57 7426.39 00:22:28.724 ======================================================== 00:22:28.724 Total : 12960.39 50.63 4938.47 1162.57 7426.39 00:22:28.724 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0HwdhAqQGd 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0HwdhAqQGd 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2707378 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2707378 /var/tmp/bdevperf.sock 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2707378 ']' 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:28.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.982 03:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.982 [2024-12-13 03:33:30.068812] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:28.982 [2024-12-13 03:33:30.068915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707378 ] 00:22:28.982 [2024-12-13 03:33:30.177246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.241 [2024-12-13 03:33:30.289823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.808 03:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.808 03:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:29.808 03:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0HwdhAqQGd 00:22:30.067 03:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:30.067 [2024-12-13 03:33:31.227140] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.326 TLSTESTn1 00:22:30.326 03:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:30.326 Running I/O for 10 seconds... 
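Stripped of the xtrace noise, the TLS setup exercised above reduces to deriving a PSK interchange string, storing it in a 0600 key file, and a short rpc.py sequence on the target and initiator sides. The sketch below reuses the rpc.py path, key material, NQNs, and /tmp key file from this run; the Python step is an assumption about what format_interchange_psk computes (base64 of the key bytes with a 4-byte little-endian CRC32 appended), not a copy of nvmf/common.sh.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# derive the NVMeTLSkey-1 interchange form of the configured key (assumed
# equivalent to: format_interchange_psk 00112233445566778899aabbccddeeff 1)
psk=$(python3 - <<'PYEOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:01:{}:".format(base64.b64encode(key + crc).decode()), end="")
PYEOF
)
key_path=/tmp/tmp.0HwdhAqQGd
echo -n "$psk" > "$key_path" && chmod 0600 "$key_path"

# target side: ssl socket implementation, TLS 1.3, subsystem with a TLS ('-k')
# listener, and a host entry bound to the PSK registered as key0
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side (bdevperf's RPC socket): register the same key file and attach over TLS
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0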
00:22:32.641 4585.00 IOPS, 17.91 MiB/s [2024-12-13T02:33:34.787Z] 4662.50 IOPS, 18.21 MiB/s [2024-12-13T02:33:35.721Z] 4727.00 IOPS, 18.46 MiB/s [2024-12-13T02:33:36.656Z] 4746.50 IOPS, 18.54 MiB/s [2024-12-13T02:33:37.593Z] 4732.40 IOPS, 18.49 MiB/s [2024-12-13T02:33:38.529Z] 4735.00 IOPS, 18.50 MiB/s [2024-12-13T02:33:39.466Z] 4758.57 IOPS, 18.59 MiB/s [2024-12-13T02:33:40.844Z] 4756.38 IOPS, 18.58 MiB/s [2024-12-13T02:33:41.781Z] 4733.11 IOPS, 18.49 MiB/s [2024-12-13T02:33:41.781Z] 4741.50 IOPS, 18.52 MiB/s 00:22:40.572 Latency(us) 00:22:40.572 [2024-12-13T02:33:41.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.572 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:40.572 Verification LBA range: start 0x0 length 0x2000 00:22:40.572 TLSTESTn1 : 10.02 4746.41 18.54 0.00 0.00 26925.40 6272.73 35451.86 00:22:40.572 [2024-12-13T02:33:41.781Z] =================================================================================================================== 00:22:40.572 [2024-12-13T02:33:41.781Z] Total : 4746.41 18.54 0.00 0.00 26925.40 6272.73 35451.86 00:22:40.572 { 00:22:40.572 "results": [ 00:22:40.572 { 00:22:40.572 "job": "TLSTESTn1", 00:22:40.572 "core_mask": "0x4", 00:22:40.572 "workload": "verify", 00:22:40.572 "status": "finished", 00:22:40.572 "verify_range": { 00:22:40.572 "start": 0, 00:22:40.572 "length": 8192 00:22:40.572 }, 00:22:40.572 "queue_depth": 128, 00:22:40.572 "io_size": 4096, 00:22:40.572 "runtime": 10.015992, 00:22:40.572 "iops": 4746.409541860657, 00:22:40.572 "mibps": 18.54066227289319, 00:22:40.572 "io_failed": 0, 00:22:40.572 "io_timeout": 0, 00:22:40.572 "avg_latency_us": 26925.398051325203, 00:22:40.572 "min_latency_us": 6272.731428571428, 00:22:40.572 "max_latency_us": 35451.85523809524 00:22:40.572 } 00:22:40.572 ], 00:22:40.572 "core_count": 1 00:22:40.572 } 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2707378 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2707378 ']' 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2707378 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2707378 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2707378' 00:22:40.572 killing process with pid 2707378 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2707378 00:22:40.572 Received shutdown signal, test time was about 10.000000 seconds 00:22:40.572 00:22:40.572 Latency(us) 00:22:40.572 [2024-12-13T02:33:41.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.572 [2024-12-13T02:33:41.781Z] 
=================================================================================================================== 00:22:40.572 [2024-12-13T02:33:41.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.572 03:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2707378 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yj9e9xSAzV 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yj9e9xSAzV 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yj9e9xSAzV 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yj9e9xSAzV 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2709378 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2709378 /var/tmp/bdevperf.sock 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2709378 ']' 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.509 03:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.509 [2024-12-13 03:33:42.515549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:41.509 [2024-12-13 03:33:42.515639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709378 ] 00:22:41.509 [2024-12-13 03:33:42.623264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.768 [2024-12-13 03:33:42.734443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.335 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.335 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:42.335 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yj9e9xSAzV 00:22:42.335 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:42.594 [2024-12-13 03:33:43.695527] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.594 [2024-12-13 03:33:43.708160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:42.594 [2024-12-13 03:33:43.708231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:22:42.594 [2024-12-13 03:33:43.709214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:22:42.594 [2024-12-13 03:33:43.710208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:42.594 [2024-12-13 03:33:43.710232] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:42.594 [2024-12-13 03:33:43.710244] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:42.595 [2024-12-13 03:33:43.710259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:42.595 request: 00:22:42.595 { 00:22:42.595 "name": "TLSTEST", 00:22:42.595 "trtype": "tcp", 00:22:42.595 "traddr": "10.0.0.2", 00:22:42.595 "adrfam": "ipv4", 00:22:42.595 "trsvcid": "4420", 00:22:42.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.595 "prchk_reftag": false, 00:22:42.595 "prchk_guard": false, 00:22:42.595 "hdgst": false, 00:22:42.595 "ddgst": false, 00:22:42.595 "psk": "key0", 00:22:42.595 "allow_unrecognized_csi": false, 00:22:42.595 "method": "bdev_nvme_attach_controller", 00:22:42.595 "req_id": 1 00:22:42.595 } 00:22:42.595 Got JSON-RPC error response 00:22:42.595 response: 00:22:42.595 { 00:22:42.595 "code": -5, 00:22:42.595 "message": "Input/output error" 00:22:42.595 } 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2709378 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2709378 ']' 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2709378 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2709378 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2709378' 00:22:42.595 killing process with pid 2709378 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2709378 00:22:42.595 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.595 00:22:42.595 Latency(us) 00:22:42.595 [2024-12-13T02:33:43.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.595 [2024-12-13T02:33:43.804Z] =================================================================================================================== 00:22:42.595 [2024-12-13T02:33:43.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:42.595 03:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2709378 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0HwdhAqQGd 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.0HwdhAqQGd 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0HwdhAqQGd 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0HwdhAqQGd 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2709826 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2709826 /var/tmp/bdevperf.sock 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2709826 ']' 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.532 03:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.791 [2024-12-13 03:33:44.742453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
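The expected-failure cases here lean on the NOT helper to assert that attaching with a mismatched key or NQN pair fails. A minimal sketch of the idea (simplified: the real autotest_common.sh helper, visible in the traces above, also distinguishes exit codes above 128 so crashes are not counted as expected failures):

NOT() {
    local es=0
    "$@" || es=$?
    # succeed only when the wrapped command failed
    (( es != 0 ))
}
# e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.0HwdhAqQGd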
00:22:43.791 [2024-12-13 03:33:44.742546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2709826 ] 00:22:43.791 [2024-12-13 03:33:44.849614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.791 [2024-12-13 03:33:44.959857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.358 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.358 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:44.358 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0HwdhAqQGd 00:22:44.617 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:44.876 [2024-12-13 03:33:45.921949] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:44.876 [2024-12-13 03:33:45.933643] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:44.876 [2024-12-13 03:33:45.933669] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:44.876 [2024-12-13 03:33:45.933704] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:44.876 [2024-12-13 03:33:45.933843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:22:44.876 [2024-12-13 03:33:45.934824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:22:44.876 [2024-12-13 03:33:45.935827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:44.876 [2024-12-13 03:33:45.935850] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:44.876 [2024-12-13 03:33:45.935865] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:44.876 [2024-12-13 03:33:45.935880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:44.876 request: 00:22:44.876 { 00:22:44.876 "name": "TLSTEST", 00:22:44.876 "trtype": "tcp", 00:22:44.876 "traddr": "10.0.0.2", 00:22:44.876 "adrfam": "ipv4", 00:22:44.876 "trsvcid": "4420", 00:22:44.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.876 "prchk_reftag": false, 00:22:44.876 "prchk_guard": false, 00:22:44.876 "hdgst": false, 00:22:44.876 "ddgst": false, 00:22:44.876 "psk": "key0", 00:22:44.876 "allow_unrecognized_csi": false, 00:22:44.876 "method": "bdev_nvme_attach_controller", 00:22:44.876 "req_id": 1 00:22:44.876 } 00:22:44.876 Got JSON-RPC error response 00:22:44.876 response: 00:22:44.876 { 00:22:44.876 "code": -5, 00:22:44.876 "message": "Input/output error" 00:22:44.876 } 00:22:44.876 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2709826 00:22:44.876 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2709826 ']' 00:22:44.876 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2709826 00:22:44.876 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:44.876 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.876 03:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2709826 00:22:44.876 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:44.876 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:44.876 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2709826' 00:22:44.876 killing process with pid 2709826 00:22:44.876 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2709826 00:22:44.876 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.876 00:22:44.876 Latency(us) 00:22:44.876 [2024-12-13T02:33:46.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.876 [2024-12-13T02:33:46.085Z] =================================================================================================================== 00:22:44.876 [2024-12-13T02:33:46.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:44.876 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2709826 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0HwdhAqQGd 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.0HwdhAqQGd 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.812 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.0HwdhAqQGd 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.0HwdhAqQGd 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2710165 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2710165 /var/tmp/bdevperf.sock 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2710165 ']' 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.813 03:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.813 [2024-12-13 03:33:46.958577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
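The 'Could not find PSK for identity' errors in these mismatch cases point at the failure mechanism: the target looks the PSK up under an identity built from the connecting host NQN and the subsystem NQN, so a key registered for a different NQN pair is simply never found. A rough sketch of that identity string, inferred only from the error text in the host-NQN mismatch case above (the authoritative format is defined by the NVMe/TCP TLS PSK rules and SPDK's posix sock code):

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1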
00:22:45.813 [2024-12-13 03:33:46.958664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710165 ] 00:22:46.071 [2024-12-13 03:33:47.067793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.071 [2024-12-13 03:33:47.177388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.637 03:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.637 03:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.637 03:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0HwdhAqQGd 00:22:46.895 03:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.153 [2024-12-13 03:33:48.117994] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.153 [2024-12-13 03:33:48.129310] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:47.153 [2024-12-13 03:33:48.129340] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:47.154 [2024-12-13 03:33:48.129372] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:47.154 [2024-12-13 03:33:48.130182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:22:47.154 [2024-12-13 03:33:48.131162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:22:47.154 [2024-12-13 03:33:48.132166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:47.154 [2024-12-13 03:33:48.132188] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:47.154 [2024-12-13 03:33:48.132203] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:47.154 [2024-12-13 03:33:48.132218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:22:47.154 request: 00:22:47.154 { 00:22:47.154 "name": "TLSTEST", 00:22:47.154 "trtype": "tcp", 00:22:47.154 "traddr": "10.0.0.2", 00:22:47.154 "adrfam": "ipv4", 00:22:47.154 "trsvcid": "4420", 00:22:47.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:47.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.154 "prchk_reftag": false, 00:22:47.154 "prchk_guard": false, 00:22:47.154 "hdgst": false, 00:22:47.154 "ddgst": false, 00:22:47.154 "psk": "key0", 00:22:47.154 "allow_unrecognized_csi": false, 00:22:47.154 "method": "bdev_nvme_attach_controller", 00:22:47.154 "req_id": 1 00:22:47.154 } 00:22:47.154 Got JSON-RPC error response 00:22:47.154 response: 00:22:47.154 { 00:22:47.154 "code": -5, 00:22:47.154 "message": "Input/output error" 00:22:47.154 } 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2710165 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2710165 ']' 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2710165 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710165 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710165' 00:22:47.154 killing process with pid 2710165 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2710165 00:22:47.154 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.154 00:22:47.154 Latency(us) 00:22:47.154 [2024-12-13T02:33:48.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.154 [2024-12-13T02:33:48.363Z] =================================================================================================================== 00:22:47.154 [2024-12-13T02:33:48.363Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.154 03:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2710165 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:48.089 
03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2710523 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2710523 /var/tmp/bdevperf.sock 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2710523 ']' 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.089 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.089 [2024-12-13 03:33:49.156564] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:48.089 [2024-12-13 03:33:49.156651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710523 ] 00:22:48.089 [2024-12-13 03:33:49.262899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.348 [2024-12-13 03:33:49.369934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.914 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.914 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.914 03:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:49.173 [2024-12-13 03:33:50.142125] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:49.173 [2024-12-13 03:33:50.142162] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:49.173 request: 00:22:49.173 { 00:22:49.173 "name": "key0", 00:22:49.173 "path": "", 00:22:49.173 "method": "keyring_file_add_key", 00:22:49.173 "req_id": 1 00:22:49.173 } 00:22:49.173 Got JSON-RPC error response 00:22:49.173 response: 00:22:49.173 { 00:22:49.173 "code": -1, 00:22:49.173 "message": "Operation not permitted" 00:22:49.173 } 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.173 [2024-12-13 03:33:50.334746] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.173 [2024-12-13 03:33:50.334797] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:49.173 request: 00:22:49.173 { 00:22:49.173 "name": "TLSTEST", 00:22:49.173 "trtype": "tcp", 00:22:49.173 "traddr": "10.0.0.2", 00:22:49.173 "adrfam": "ipv4", 00:22:49.173 "trsvcid": "4420", 00:22:49.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.173 "prchk_reftag": false, 00:22:49.173 "prchk_guard": false, 00:22:49.173 "hdgst": false, 00:22:49.173 "ddgst": false, 00:22:49.173 "psk": "key0", 00:22:49.173 "allow_unrecognized_csi": false, 00:22:49.173 "method": "bdev_nvme_attach_controller", 00:22:49.173 "req_id": 1 00:22:49.173 } 00:22:49.173 Got JSON-RPC error response 00:22:49.173 response: 00:22:49.173 { 00:22:49.173 "code": -126, 00:22:49.173 "message": "Required key not available" 00:22:49.173 } 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2710523 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2710523 ']' 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2710523 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.173 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2710523 00:22:49.431 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.431 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.431 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710523' 00:22:49.431 killing process with pid 2710523 00:22:49.431 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2710523 00:22:49.431 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.431 00:22:49.431 Latency(us) 00:22:49.431 [2024-12-13T02:33:50.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.431 [2024-12-13T02:33:50.641Z] =================================================================================================================== 00:22:49.432 [2024-12-13T02:33:50.641Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.432 03:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2710523 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2704863 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2704863 ']' 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2704863 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2704863 00:22:50.366 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:50.367 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:50.367 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2704863' 00:22:50.367 killing process with pid 2704863 00:22:50.367 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2704863 00:22:50.367 03:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2704863 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:51.741 03:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Ov21J81T58 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Ov21J81T58 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2711200 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2711200 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2711200 ']' 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.741 03:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.741 [2024-12-13 03:33:52.721803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:51.741 [2024-12-13 03:33:52.721893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.741 [2024-12-13 03:33:52.837731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.741 [2024-12-13 03:33:52.938974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.741 [2024-12-13 03:33:52.939019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:51.741 [2024-12-13 03:33:52.939029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.741 [2024-12-13 03:33:52.939055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.741 [2024-12-13 03:33:52.939064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.741 [2024-12-13 03:33:52.940455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.308 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.308 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:52.308 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.309 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.309 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.567 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.567 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Ov21J81T58 00:22:52.567 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ov21J81T58 00:22:52.567 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.567 [2024-12-13 03:33:53.711813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.567 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.824 03:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:53.082 [2024-12-13 03:33:54.080801] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.082 [2024-12-13 03:33:54.081083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.082 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.340 malloc0 00:22:53.340 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.340 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:22:53.598 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.856 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ov21J81T58 00:22:53.856 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:53.856 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.856 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.856 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ov21J81T58 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2711462 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2711462 /var/tmp/bdevperf.sock 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2711462 ']' 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.857 03:33:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.857 [2024-12-13 03:33:54.929233] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
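
For reference, the key_long value generated above with format_interchange_psk ("NVMeTLSkey-1:02:...") and written to /tmp/tmp.Ov21J81T58 is a TLS PSK in interchange format: a "NVMeTLSkey-1" prefix, a hash identifier ("01" for SHA-256, "02" for SHA-384), and a base64 blob carrying the configured PSK bytes with a 4-byte CRC-32 appended, terminated by ':'. A minimal decoding sketch, illustrative only and not part of the test scripts, that simply unpacks the value shown in this log:

import base64

key_long = ("NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYw"
            "MDExMjIzMzQ0NTU2Njc3wWXNJw==:")

prefix, hash_id, blob = key_long.rstrip(":").split(":")
raw = base64.b64decode(blob)
configured_psk, crc = raw[:-4], raw[-4:]   # last 4 bytes: CRC-32 over the PSK

print(prefix)           # NVMeTLSkey-1
print(hash_id)          # 02 -> PSK intended for SHA-384
print(configured_psk)   # b'00112233445566778899aabbccddeeff0011223344556677'
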
00:22:53.857 [2024-12-13 03:33:54.929319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2711462 ] 00:22:53.857 [2024-12-13 03:33:55.035572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.114 [2024-12-13 03:33:55.144109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.680 03:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.680 03:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:54.680 03:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:22:54.939 03:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.939 [2024-12-13 03:33:56.061739] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.939 TLSTESTn1 00:22:55.197 03:33:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:55.197 Running I/O for 10 seconds... 00:22:57.063 4488.00 IOPS, 17.53 MiB/s [2024-12-13T02:33:59.646Z] 4557.50 IOPS, 17.80 MiB/s [2024-12-13T02:34:00.580Z] 4619.00 IOPS, 18.04 MiB/s [2024-12-13T02:34:01.515Z] 4598.50 IOPS, 17.96 MiB/s [2024-12-13T02:34:02.450Z] 4605.00 IOPS, 17.99 MiB/s [2024-12-13T02:34:03.385Z] 4619.17 IOPS, 18.04 MiB/s [2024-12-13T02:34:04.319Z] 4631.43 IOPS, 18.09 MiB/s [2024-12-13T02:34:05.271Z] 4623.88 IOPS, 18.06 MiB/s [2024-12-13T02:34:06.646Z] 4618.67 IOPS, 18.04 MiB/s [2024-12-13T02:34:06.646Z] 4623.60 IOPS, 18.06 MiB/s 00:23:05.437 Latency(us) 00:23:05.437 [2024-12-13T02:34:06.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.437 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.437 Verification LBA range: start 0x0 length 0x2000 00:23:05.437 TLSTESTn1 : 10.02 4628.70 18.08 0.00 0.00 27610.51 6553.60 26838.55 00:23:05.437 [2024-12-13T02:34:06.646Z] =================================================================================================================== 00:23:05.437 [2024-12-13T02:34:06.646Z] Total : 4628.70 18.08 0.00 0.00 27610.51 6553.60 26838.55 00:23:05.437 { 00:23:05.437 "results": [ 00:23:05.437 { 00:23:05.437 "job": "TLSTESTn1", 00:23:05.437 "core_mask": "0x4", 00:23:05.437 "workload": "verify", 00:23:05.437 "status": "finished", 00:23:05.437 "verify_range": { 00:23:05.437 "start": 0, 00:23:05.437 "length": 8192 00:23:05.437 }, 00:23:05.437 "queue_depth": 128, 00:23:05.437 "io_size": 4096, 00:23:05.437 "runtime": 10.016421, 00:23:05.437 "iops": 4628.699213022296, 00:23:05.437 "mibps": 18.080856300868344, 00:23:05.437 "io_failed": 0, 00:23:05.437 "io_timeout": 0, 00:23:05.437 "avg_latency_us": 27610.50763476212, 00:23:05.437 "min_latency_us": 6553.6, 00:23:05.437 "max_latency_us": 26838.55238095238 00:23:05.437 } 00:23:05.437 ], 00:23:05.437 "core_count": 1 
00:23:05.437 } 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2711462 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2711462 ']' 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2711462 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2711462 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2711462' 00:23:05.437 killing process with pid 2711462 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2711462 00:23:05.437 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.437 00:23:05.437 Latency(us) 00:23:05.437 [2024-12-13T02:34:06.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.437 [2024-12-13T02:34:06.646Z] =================================================================================================================== 00:23:05.437 [2024-12-13T02:34:06.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.437 03:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2711462 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Ov21J81T58 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ov21J81T58 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ov21J81T58 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ov21J81T58 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.373 03:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ov21J81T58 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2713469 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2713469 /var/tmp/bdevperf.sock 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2713469 ']' 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.373 03:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.373 [2024-12-13 03:34:07.355655] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
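
As a sanity check on the TLSTESTn1 results reported above (the 10-second verify run against the TLS-enabled listener), the "mibps" and average-latency figures are consistent with the reported IOPS, the 4096-byte I/O size and the queue depth of 128 passed to bdevperf. A small illustrative calculation, not part of the test output:

iops = 4628.699213022296        # "iops" from the results JSON above
io_size = 4096                  # -o 4096 given to bdevperf
queue_depth = 128               # -q 128 given to bdevperf

mibps = iops * io_size / (1024 * 1024)
avg_latency_us = queue_depth / iops * 1_000_000   # Little's law approximation

print(f"{mibps:.2f} MiB/s")        # ~18.08, matching the "mibps" field
print(f"{avg_latency_us:.0f} us")  # ~27654, close to the ~27611 us reported
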
00:23:06.373 [2024-12-13 03:34:07.355749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713469 ] 00:23:06.373 [2024-12-13 03:34:07.463021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.373 [2024-12-13 03:34:07.574151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.307 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.307 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.307 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:07.307 [2024-12-13 03:34:08.339476] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ov21J81T58': 0100666 00:23:07.307 [2024-12-13 03:34:08.339515] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:07.307 request: 00:23:07.307 { 00:23:07.307 "name": "key0", 00:23:07.307 "path": "/tmp/tmp.Ov21J81T58", 00:23:07.308 "method": "keyring_file_add_key", 00:23:07.308 "req_id": 1 00:23:07.308 } 00:23:07.308 Got JSON-RPC error response 00:23:07.308 response: 00:23:07.308 { 00:23:07.308 "code": -1, 00:23:07.308 "message": "Operation not permitted" 00:23:07.308 } 00:23:07.308 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.566 [2024-12-13 03:34:08.524072] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.566 [2024-12-13 03:34:08.524117] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:07.566 request: 00:23:07.566 { 00:23:07.566 "name": "TLSTEST", 00:23:07.566 "trtype": "tcp", 00:23:07.566 "traddr": "10.0.0.2", 00:23:07.566 "adrfam": "ipv4", 00:23:07.566 "trsvcid": "4420", 00:23:07.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.566 "prchk_reftag": false, 00:23:07.566 "prchk_guard": false, 00:23:07.566 "hdgst": false, 00:23:07.566 "ddgst": false, 00:23:07.566 "psk": "key0", 00:23:07.566 "allow_unrecognized_csi": false, 00:23:07.566 "method": "bdev_nvme_attach_controller", 00:23:07.566 "req_id": 1 00:23:07.566 } 00:23:07.566 Got JSON-RPC error response 00:23:07.566 response: 00:23:07.566 { 00:23:07.566 "code": -126, 00:23:07.566 "message": "Required key not available" 00:23:07.566 } 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2713469 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2713469 ']' 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2713469 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713469 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713469' 00:23:07.566 killing process with pid 2713469 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2713469 00:23:07.566 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.566 00:23:07.566 Latency(us) 00:23:07.566 [2024-12-13T02:34:08.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.566 [2024-12-13T02:34:08.775Z] =================================================================================================================== 00:23:07.566 [2024-12-13T02:34:08.775Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.566 03:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2713469 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2711200 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2711200 ']' 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2711200 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2711200 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2711200' 00:23:08.586 killing process with pid 2711200 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2711200 00:23:08.586 03:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2711200 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2714084 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2714084 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2714084 ']' 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.961 03:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.961 [2024-12-13 03:34:10.850913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:09.961 [2024-12-13 03:34:10.851017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.961 [2024-12-13 03:34:10.967339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.961 [2024-12-13 03:34:11.069559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.961 [2024-12-13 03:34:11.069602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.961 [2024-12-13 03:34:11.069611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.961 [2024-12-13 03:34:11.069621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.961 [2024-12-13 03:34:11.069629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
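
The keyring_file_add_key failures in this log ("Non-absolute paths are not allowed" and "Invalid permissions for key file '/tmp/tmp.Ov21J81T58': 0100666") both come from checks on the key file path: the RPC refuses relative or empty paths, and the tests only succeed once the file is chmod 0600, i.e. carries no group/other permission bits. A rough pre-check a caller could run before issuing the RPC; this is a hypothetical helper, not part of rpc.py or the SPDK scripts:

import os
import stat

def psk_file_looks_ok(path: str) -> bool:
    # Mirrors the two rejections seen in this log: relative/empty paths are
    # refused, and group/other permission bits (e.g. 0666) lead to
    # "Failed to add key 'key0' ... Operation not permitted", while 0600 passes.
    if not os.path.isabs(path) or not os.path.isfile(path):
        return False
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

# Example (assumes the key file exists locally):
# print(psk_file_looks_ok("/tmp/tmp.Ov21J81T58"))
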
00:23:09.961 [2024-12-13 03:34:11.070998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.527 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.527 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Ov21J81T58 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ov21J81T58 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Ov21J81T58 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ov21J81T58 00:23:10.528 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.786 [2024-12-13 03:34:11.863973] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.786 03:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:11.044 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:11.044 [2024-12-13 03:34:12.224910] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.044 [2024-12-13 03:34:12.225198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.044 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.302 malloc0 00:23:11.302 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.560 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:11.818 [2024-12-13 
03:34:12.777458] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ov21J81T58': 0100666 00:23:11.818 [2024-12-13 03:34:12.777492] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:11.818 request: 00:23:11.818 { 00:23:11.818 "name": "key0", 00:23:11.818 "path": "/tmp/tmp.Ov21J81T58", 00:23:11.818 "method": "keyring_file_add_key", 00:23:11.818 "req_id": 1 00:23:11.818 } 00:23:11.818 Got JSON-RPC error response 00:23:11.818 response: 00:23:11.818 { 00:23:11.818 "code": -1, 00:23:11.818 "message": "Operation not permitted" 00:23:11.818 } 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.818 [2024-12-13 03:34:12.949930] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:11.818 [2024-12-13 03:34:12.949976] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:11.818 request: 00:23:11.818 { 00:23:11.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.818 "host": "nqn.2016-06.io.spdk:host1", 00:23:11.818 "psk": "key0", 00:23:11.818 "method": "nvmf_subsystem_add_host", 00:23:11.818 "req_id": 1 00:23:11.818 } 00:23:11.818 Got JSON-RPC error response 00:23:11.818 response: 00:23:11.818 { 00:23:11.818 "code": -32603, 00:23:11.818 "message": "Internal error" 00:23:11.818 } 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2714084 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2714084 ']' 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2714084 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.818 03:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2714084 00:23:11.818 03:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:11.818 03:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:11.818 03:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2714084' 00:23:11.818 killing process with pid 2714084 00:23:11.818 03:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2714084 00:23:11.818 03:34:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2714084 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Ov21J81T58 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:13.212 03:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2714640 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2714640 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2714640 ']' 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.212 03:34:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.212 [2024-12-13 03:34:14.291153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:13.212 [2024-12-13 03:34:14.291242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.212 [2024-12-13 03:34:14.408324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.471 [2024-12-13 03:34:14.513101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.471 [2024-12-13 03:34:14.513143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.471 [2024-12-13 03:34:14.513153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.471 [2024-12-13 03:34:14.513163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.471 [2024-12-13 03:34:14.513170] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:13.471 [2024-12-13 03:34:14.514381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.037 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Ov21J81T58 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ov21J81T58 00:23:14.038 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.295 [2024-12-13 03:34:15.295441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.295 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.295 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.553 [2024-12-13 03:34:15.668447] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.553 [2024-12-13 03:34:15.668696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.553 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.811 malloc0 00:23:14.811 03:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:15.068 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:15.326 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.326 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2715084 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2715084 /var/tmp/bdevperf.sock 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2715084 ']' 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.327 03:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.327 [2024-12-13 03:34:16.507192] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:15.327 [2024-12-13 03:34:16.507296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715084 ] 00:23:15.585 [2024-12-13 03:34:16.614647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.585 [2024-12-13 03:34:16.721143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.151 03:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.151 03:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.151 03:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:16.409 03:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.667 [2024-12-13 03:34:17.662059] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.667 TLSTESTn1 00:23:16.667 03:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:16.926 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:16.926 "subsystems": [ 00:23:16.926 { 00:23:16.926 "subsystem": "keyring", 00:23:16.926 "config": [ 00:23:16.926 { 00:23:16.926 "method": "keyring_file_add_key", 00:23:16.926 "params": { 00:23:16.926 "name": "key0", 00:23:16.926 "path": "/tmp/tmp.Ov21J81T58" 00:23:16.926 } 00:23:16.926 } 00:23:16.926 ] 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "subsystem": "iobuf", 00:23:16.926 "config": [ 00:23:16.926 { 00:23:16.926 "method": "iobuf_set_options", 00:23:16.926 "params": { 00:23:16.926 "small_pool_count": 8192, 00:23:16.926 "large_pool_count": 1024, 00:23:16.926 "small_bufsize": 8192, 00:23:16.926 "large_bufsize": 135168, 00:23:16.926 "enable_numa": false 00:23:16.926 } 00:23:16.926 } 00:23:16.926 ] 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "subsystem": "sock", 00:23:16.926 "config": [ 00:23:16.926 { 00:23:16.926 "method": "sock_set_default_impl", 00:23:16.926 "params": { 00:23:16.926 "impl_name": "posix" 
00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "sock_impl_set_options", 00:23:16.926 "params": { 00:23:16.926 "impl_name": "ssl", 00:23:16.926 "recv_buf_size": 4096, 00:23:16.926 "send_buf_size": 4096, 00:23:16.926 "enable_recv_pipe": true, 00:23:16.926 "enable_quickack": false, 00:23:16.926 "enable_placement_id": 0, 00:23:16.926 "enable_zerocopy_send_server": true, 00:23:16.926 "enable_zerocopy_send_client": false, 00:23:16.926 "zerocopy_threshold": 0, 00:23:16.926 "tls_version": 0, 00:23:16.926 "enable_ktls": false 00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "sock_impl_set_options", 00:23:16.926 "params": { 00:23:16.926 "impl_name": "posix", 00:23:16.926 "recv_buf_size": 2097152, 00:23:16.926 "send_buf_size": 2097152, 00:23:16.926 "enable_recv_pipe": true, 00:23:16.926 "enable_quickack": false, 00:23:16.926 "enable_placement_id": 0, 00:23:16.926 "enable_zerocopy_send_server": true, 00:23:16.926 "enable_zerocopy_send_client": false, 00:23:16.926 "zerocopy_threshold": 0, 00:23:16.926 "tls_version": 0, 00:23:16.926 "enable_ktls": false 00:23:16.926 } 00:23:16.926 } 00:23:16.926 ] 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "subsystem": "vmd", 00:23:16.926 "config": [] 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "subsystem": "accel", 00:23:16.926 "config": [ 00:23:16.926 { 00:23:16.926 "method": "accel_set_options", 00:23:16.926 "params": { 00:23:16.926 "small_cache_size": 128, 00:23:16.926 "large_cache_size": 16, 00:23:16.926 "task_count": 2048, 00:23:16.926 "sequence_count": 2048, 00:23:16.926 "buf_count": 2048 00:23:16.926 } 00:23:16.926 } 00:23:16.926 ] 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "subsystem": "bdev", 00:23:16.926 "config": [ 00:23:16.926 { 00:23:16.926 "method": "bdev_set_options", 00:23:16.926 "params": { 00:23:16.926 "bdev_io_pool_size": 65535, 00:23:16.926 "bdev_io_cache_size": 256, 00:23:16.926 "bdev_auto_examine": true, 00:23:16.926 "iobuf_small_cache_size": 128, 00:23:16.926 "iobuf_large_cache_size": 16 00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "bdev_raid_set_options", 00:23:16.926 "params": { 00:23:16.926 "process_window_size_kb": 1024, 00:23:16.926 "process_max_bandwidth_mb_sec": 0 00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "bdev_iscsi_set_options", 00:23:16.926 "params": { 00:23:16.926 "timeout_sec": 30 00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "bdev_nvme_set_options", 00:23:16.926 "params": { 00:23:16.926 "action_on_timeout": "none", 00:23:16.926 "timeout_us": 0, 00:23:16.926 "timeout_admin_us": 0, 00:23:16.926 "keep_alive_timeout_ms": 10000, 00:23:16.926 "arbitration_burst": 0, 00:23:16.926 "low_priority_weight": 0, 00:23:16.926 "medium_priority_weight": 0, 00:23:16.926 "high_priority_weight": 0, 00:23:16.926 "nvme_adminq_poll_period_us": 10000, 00:23:16.926 "nvme_ioq_poll_period_us": 0, 00:23:16.926 "io_queue_requests": 0, 00:23:16.926 "delay_cmd_submit": true, 00:23:16.926 "transport_retry_count": 4, 00:23:16.926 "bdev_retry_count": 3, 00:23:16.926 "transport_ack_timeout": 0, 00:23:16.926 "ctrlr_loss_timeout_sec": 0, 00:23:16.926 "reconnect_delay_sec": 0, 00:23:16.926 "fast_io_fail_timeout_sec": 0, 00:23:16.926 "disable_auto_failback": false, 00:23:16.926 "generate_uuids": false, 00:23:16.926 "transport_tos": 0, 00:23:16.926 "nvme_error_stat": false, 00:23:16.926 "rdma_srq_size": 0, 00:23:16.926 "io_path_stat": false, 00:23:16.926 "allow_accel_sequence": false, 00:23:16.926 "rdma_max_cq_size": 0, 00:23:16.926 
"rdma_cm_event_timeout_ms": 0, 00:23:16.926 "dhchap_digests": [ 00:23:16.926 "sha256", 00:23:16.926 "sha384", 00:23:16.926 "sha512" 00:23:16.926 ], 00:23:16.926 "dhchap_dhgroups": [ 00:23:16.926 "null", 00:23:16.926 "ffdhe2048", 00:23:16.926 "ffdhe3072", 00:23:16.926 "ffdhe4096", 00:23:16.926 "ffdhe6144", 00:23:16.926 "ffdhe8192" 00:23:16.926 ], 00:23:16.926 "rdma_umr_per_io": false 00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "bdev_nvme_set_hotplug", 00:23:16.926 "params": { 00:23:16.926 "period_us": 100000, 00:23:16.926 "enable": false 00:23:16.926 } 00:23:16.926 }, 00:23:16.926 { 00:23:16.926 "method": "bdev_malloc_create", 00:23:16.926 "params": { 00:23:16.926 "name": "malloc0", 00:23:16.926 "num_blocks": 8192, 00:23:16.926 "block_size": 4096, 00:23:16.927 "physical_block_size": 4096, 00:23:16.927 "uuid": "7a32425e-af06-4c03-bbcc-f39c96375bd4", 00:23:16.927 "optimal_io_boundary": 0, 00:23:16.927 "md_size": 0, 00:23:16.927 "dif_type": 0, 00:23:16.927 "dif_is_head_of_md": false, 00:23:16.927 "dif_pi_format": 0 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "bdev_wait_for_examine" 00:23:16.927 } 00:23:16.927 ] 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "subsystem": "nbd", 00:23:16.927 "config": [] 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "subsystem": "scheduler", 00:23:16.927 "config": [ 00:23:16.927 { 00:23:16.927 "method": "framework_set_scheduler", 00:23:16.927 "params": { 00:23:16.927 "name": "static" 00:23:16.927 } 00:23:16.927 } 00:23:16.927 ] 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "subsystem": "nvmf", 00:23:16.927 "config": [ 00:23:16.927 { 00:23:16.927 "method": "nvmf_set_config", 00:23:16.927 "params": { 00:23:16.927 "discovery_filter": "match_any", 00:23:16.927 "admin_cmd_passthru": { 00:23:16.927 "identify_ctrlr": false 00:23:16.927 }, 00:23:16.927 "dhchap_digests": [ 00:23:16.927 "sha256", 00:23:16.927 "sha384", 00:23:16.927 "sha512" 00:23:16.927 ], 00:23:16.927 "dhchap_dhgroups": [ 00:23:16.927 "null", 00:23:16.927 "ffdhe2048", 00:23:16.927 "ffdhe3072", 00:23:16.927 "ffdhe4096", 00:23:16.927 "ffdhe6144", 00:23:16.927 "ffdhe8192" 00:23:16.927 ] 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_set_max_subsystems", 00:23:16.927 "params": { 00:23:16.927 "max_subsystems": 1024 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_set_crdt", 00:23:16.927 "params": { 00:23:16.927 "crdt1": 0, 00:23:16.927 "crdt2": 0, 00:23:16.927 "crdt3": 0 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_create_transport", 00:23:16.927 "params": { 00:23:16.927 "trtype": "TCP", 00:23:16.927 "max_queue_depth": 128, 00:23:16.927 "max_io_qpairs_per_ctrlr": 127, 00:23:16.927 "in_capsule_data_size": 4096, 00:23:16.927 "max_io_size": 131072, 00:23:16.927 "io_unit_size": 131072, 00:23:16.927 "max_aq_depth": 128, 00:23:16.927 "num_shared_buffers": 511, 00:23:16.927 "buf_cache_size": 4294967295, 00:23:16.927 "dif_insert_or_strip": false, 00:23:16.927 "zcopy": false, 00:23:16.927 "c2h_success": false, 00:23:16.927 "sock_priority": 0, 00:23:16.927 "abort_timeout_sec": 1, 00:23:16.927 "ack_timeout": 0, 00:23:16.927 "data_wr_pool_size": 0 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_create_subsystem", 00:23:16.927 "params": { 00:23:16.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.927 "allow_any_host": false, 00:23:16.927 "serial_number": "SPDK00000000000001", 00:23:16.927 "model_number": "SPDK bdev Controller", 00:23:16.927 "max_namespaces": 10, 
00:23:16.927 "min_cntlid": 1, 00:23:16.927 "max_cntlid": 65519, 00:23:16.927 "ana_reporting": false 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_subsystem_add_host", 00:23:16.927 "params": { 00:23:16.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.927 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.927 "psk": "key0" 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_subsystem_add_ns", 00:23:16.927 "params": { 00:23:16.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.927 "namespace": { 00:23:16.927 "nsid": 1, 00:23:16.927 "bdev_name": "malloc0", 00:23:16.927 "nguid": "7A32425EAF064C03BBCCF39C96375BD4", 00:23:16.927 "uuid": "7a32425e-af06-4c03-bbcc-f39c96375bd4", 00:23:16.927 "no_auto_visible": false 00:23:16.927 } 00:23:16.927 } 00:23:16.927 }, 00:23:16.927 { 00:23:16.927 "method": "nvmf_subsystem_add_listener", 00:23:16.927 "params": { 00:23:16.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.927 "listen_address": { 00:23:16.927 "trtype": "TCP", 00:23:16.927 "adrfam": "IPv4", 00:23:16.927 "traddr": "10.0.0.2", 00:23:16.927 "trsvcid": "4420" 00:23:16.927 }, 00:23:16.927 "secure_channel": true 00:23:16.927 } 00:23:16.927 } 00:23:16.927 ] 00:23:16.927 } 00:23:16.927 ] 00:23:16.927 }' 00:23:16.927 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:17.186 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:17.186 "subsystems": [ 00:23:17.186 { 00:23:17.186 "subsystem": "keyring", 00:23:17.186 "config": [ 00:23:17.186 { 00:23:17.186 "method": "keyring_file_add_key", 00:23:17.186 "params": { 00:23:17.186 "name": "key0", 00:23:17.186 "path": "/tmp/tmp.Ov21J81T58" 00:23:17.186 } 00:23:17.186 } 00:23:17.186 ] 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "subsystem": "iobuf", 00:23:17.186 "config": [ 00:23:17.186 { 00:23:17.186 "method": "iobuf_set_options", 00:23:17.186 "params": { 00:23:17.186 "small_pool_count": 8192, 00:23:17.186 "large_pool_count": 1024, 00:23:17.186 "small_bufsize": 8192, 00:23:17.186 "large_bufsize": 135168, 00:23:17.186 "enable_numa": false 00:23:17.186 } 00:23:17.186 } 00:23:17.186 ] 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "subsystem": "sock", 00:23:17.186 "config": [ 00:23:17.186 { 00:23:17.186 "method": "sock_set_default_impl", 00:23:17.186 "params": { 00:23:17.186 "impl_name": "posix" 00:23:17.186 } 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "method": "sock_impl_set_options", 00:23:17.186 "params": { 00:23:17.186 "impl_name": "ssl", 00:23:17.186 "recv_buf_size": 4096, 00:23:17.186 "send_buf_size": 4096, 00:23:17.186 "enable_recv_pipe": true, 00:23:17.186 "enable_quickack": false, 00:23:17.186 "enable_placement_id": 0, 00:23:17.186 "enable_zerocopy_send_server": true, 00:23:17.186 "enable_zerocopy_send_client": false, 00:23:17.186 "zerocopy_threshold": 0, 00:23:17.186 "tls_version": 0, 00:23:17.186 "enable_ktls": false 00:23:17.186 } 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "method": "sock_impl_set_options", 00:23:17.186 "params": { 00:23:17.186 "impl_name": "posix", 00:23:17.186 "recv_buf_size": 2097152, 00:23:17.186 "send_buf_size": 2097152, 00:23:17.186 "enable_recv_pipe": true, 00:23:17.186 "enable_quickack": false, 00:23:17.186 "enable_placement_id": 0, 00:23:17.186 "enable_zerocopy_send_server": true, 00:23:17.186 "enable_zerocopy_send_client": false, 00:23:17.186 "zerocopy_threshold": 0, 00:23:17.186 "tls_version": 0, 00:23:17.186 
"enable_ktls": false 00:23:17.186 } 00:23:17.186 } 00:23:17.186 ] 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "subsystem": "vmd", 00:23:17.186 "config": [] 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "subsystem": "accel", 00:23:17.186 "config": [ 00:23:17.186 { 00:23:17.186 "method": "accel_set_options", 00:23:17.186 "params": { 00:23:17.186 "small_cache_size": 128, 00:23:17.186 "large_cache_size": 16, 00:23:17.186 "task_count": 2048, 00:23:17.186 "sequence_count": 2048, 00:23:17.186 "buf_count": 2048 00:23:17.186 } 00:23:17.186 } 00:23:17.186 ] 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "subsystem": "bdev", 00:23:17.186 "config": [ 00:23:17.186 { 00:23:17.186 "method": "bdev_set_options", 00:23:17.186 "params": { 00:23:17.186 "bdev_io_pool_size": 65535, 00:23:17.186 "bdev_io_cache_size": 256, 00:23:17.186 "bdev_auto_examine": true, 00:23:17.186 "iobuf_small_cache_size": 128, 00:23:17.186 "iobuf_large_cache_size": 16 00:23:17.186 } 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "method": "bdev_raid_set_options", 00:23:17.186 "params": { 00:23:17.186 "process_window_size_kb": 1024, 00:23:17.186 "process_max_bandwidth_mb_sec": 0 00:23:17.186 } 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "method": "bdev_iscsi_set_options", 00:23:17.186 "params": { 00:23:17.186 "timeout_sec": 30 00:23:17.186 } 00:23:17.186 }, 00:23:17.186 { 00:23:17.186 "method": "bdev_nvme_set_options", 00:23:17.186 "params": { 00:23:17.186 "action_on_timeout": "none", 00:23:17.186 "timeout_us": 0, 00:23:17.186 "timeout_admin_us": 0, 00:23:17.186 "keep_alive_timeout_ms": 10000, 00:23:17.186 "arbitration_burst": 0, 00:23:17.186 "low_priority_weight": 0, 00:23:17.186 "medium_priority_weight": 0, 00:23:17.186 "high_priority_weight": 0, 00:23:17.186 "nvme_adminq_poll_period_us": 10000, 00:23:17.186 "nvme_ioq_poll_period_us": 0, 00:23:17.186 "io_queue_requests": 512, 00:23:17.186 "delay_cmd_submit": true, 00:23:17.186 "transport_retry_count": 4, 00:23:17.186 "bdev_retry_count": 3, 00:23:17.186 "transport_ack_timeout": 0, 00:23:17.186 "ctrlr_loss_timeout_sec": 0, 00:23:17.186 "reconnect_delay_sec": 0, 00:23:17.186 "fast_io_fail_timeout_sec": 0, 00:23:17.186 "disable_auto_failback": false, 00:23:17.186 "generate_uuids": false, 00:23:17.186 "transport_tos": 0, 00:23:17.186 "nvme_error_stat": false, 00:23:17.186 "rdma_srq_size": 0, 00:23:17.186 "io_path_stat": false, 00:23:17.186 "allow_accel_sequence": false, 00:23:17.186 "rdma_max_cq_size": 0, 00:23:17.186 "rdma_cm_event_timeout_ms": 0, 00:23:17.186 "dhchap_digests": [ 00:23:17.186 "sha256", 00:23:17.187 "sha384", 00:23:17.187 "sha512" 00:23:17.187 ], 00:23:17.187 "dhchap_dhgroups": [ 00:23:17.187 "null", 00:23:17.187 "ffdhe2048", 00:23:17.187 "ffdhe3072", 00:23:17.187 "ffdhe4096", 00:23:17.187 "ffdhe6144", 00:23:17.187 "ffdhe8192" 00:23:17.187 ], 00:23:17.187 "rdma_umr_per_io": false 00:23:17.187 } 00:23:17.187 }, 00:23:17.187 { 00:23:17.187 "method": "bdev_nvme_attach_controller", 00:23:17.187 "params": { 00:23:17.187 "name": "TLSTEST", 00:23:17.187 "trtype": "TCP", 00:23:17.187 "adrfam": "IPv4", 00:23:17.187 "traddr": "10.0.0.2", 00:23:17.187 "trsvcid": "4420", 00:23:17.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.187 "prchk_reftag": false, 00:23:17.187 "prchk_guard": false, 00:23:17.187 "ctrlr_loss_timeout_sec": 0, 00:23:17.187 "reconnect_delay_sec": 0, 00:23:17.187 "fast_io_fail_timeout_sec": 0, 00:23:17.187 "psk": "key0", 00:23:17.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.187 "hdgst": false, 00:23:17.187 "ddgst": false, 00:23:17.187 "multipath": "multipath" 
00:23:17.187 } 00:23:17.187 }, 00:23:17.187 { 00:23:17.187 "method": "bdev_nvme_set_hotplug", 00:23:17.187 "params": { 00:23:17.187 "period_us": 100000, 00:23:17.187 "enable": false 00:23:17.187 } 00:23:17.187 }, 00:23:17.187 { 00:23:17.187 "method": "bdev_wait_for_examine" 00:23:17.187 } 00:23:17.187 ] 00:23:17.187 }, 00:23:17.187 { 00:23:17.187 "subsystem": "nbd", 00:23:17.187 "config": [] 00:23:17.187 } 00:23:17.187 ] 00:23:17.187 }' 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2715084 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2715084 ']' 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2715084 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715084 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715084' 00:23:17.187 killing process with pid 2715084 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2715084 00:23:17.187 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.187 00:23:17.187 Latency(us) 00:23:17.187 [2024-12-13T02:34:18.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.187 [2024-12-13T02:34:18.396Z] =================================================================================================================== 00:23:17.187 [2024-12-13T02:34:18.396Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.187 03:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2715084 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2714640 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2714640 ']' 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2714640 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2714640 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2714640' 00:23:18.122 killing process with pid 2714640 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2714640 00:23:18.122 03:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 2714640 00:23:19.497 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:19.497 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.497 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.497 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:19.497 "subsystems": [ 00:23:19.497 { 00:23:19.497 "subsystem": "keyring", 00:23:19.497 "config": [ 00:23:19.497 { 00:23:19.497 "method": "keyring_file_add_key", 00:23:19.497 "params": { 00:23:19.497 "name": "key0", 00:23:19.497 "path": "/tmp/tmp.Ov21J81T58" 00:23:19.497 } 00:23:19.497 } 00:23:19.497 ] 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "subsystem": "iobuf", 00:23:19.497 "config": [ 00:23:19.497 { 00:23:19.497 "method": "iobuf_set_options", 00:23:19.497 "params": { 00:23:19.497 "small_pool_count": 8192, 00:23:19.497 "large_pool_count": 1024, 00:23:19.497 "small_bufsize": 8192, 00:23:19.497 "large_bufsize": 135168, 00:23:19.497 "enable_numa": false 00:23:19.497 } 00:23:19.497 } 00:23:19.497 ] 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "subsystem": "sock", 00:23:19.497 "config": [ 00:23:19.497 { 00:23:19.497 "method": "sock_set_default_impl", 00:23:19.497 "params": { 00:23:19.497 "impl_name": "posix" 00:23:19.497 } 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "method": "sock_impl_set_options", 00:23:19.497 "params": { 00:23:19.497 "impl_name": "ssl", 00:23:19.497 "recv_buf_size": 4096, 00:23:19.497 "send_buf_size": 4096, 00:23:19.497 "enable_recv_pipe": true, 00:23:19.497 "enable_quickack": false, 00:23:19.497 "enable_placement_id": 0, 00:23:19.497 "enable_zerocopy_send_server": true, 00:23:19.497 "enable_zerocopy_send_client": false, 00:23:19.497 "zerocopy_threshold": 0, 00:23:19.497 "tls_version": 0, 00:23:19.497 "enable_ktls": false 00:23:19.497 } 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "method": "sock_impl_set_options", 00:23:19.497 "params": { 00:23:19.497 "impl_name": "posix", 00:23:19.497 "recv_buf_size": 2097152, 00:23:19.497 "send_buf_size": 2097152, 00:23:19.497 "enable_recv_pipe": true, 00:23:19.497 "enable_quickack": false, 00:23:19.497 "enable_placement_id": 0, 00:23:19.497 "enable_zerocopy_send_server": true, 00:23:19.497 "enable_zerocopy_send_client": false, 00:23:19.497 "zerocopy_threshold": 0, 00:23:19.497 "tls_version": 0, 00:23:19.497 "enable_ktls": false 00:23:19.497 } 00:23:19.497 } 00:23:19.497 ] 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "subsystem": "vmd", 00:23:19.497 "config": [] 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "subsystem": "accel", 00:23:19.497 "config": [ 00:23:19.497 { 00:23:19.497 "method": "accel_set_options", 00:23:19.497 "params": { 00:23:19.497 "small_cache_size": 128, 00:23:19.497 "large_cache_size": 16, 00:23:19.497 "task_count": 2048, 00:23:19.497 "sequence_count": 2048, 00:23:19.497 "buf_count": 2048 00:23:19.497 } 00:23:19.497 } 00:23:19.497 ] 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "subsystem": "bdev", 00:23:19.497 "config": [ 00:23:19.497 { 00:23:19.497 "method": "bdev_set_options", 00:23:19.497 "params": { 00:23:19.497 "bdev_io_pool_size": 65535, 00:23:19.497 "bdev_io_cache_size": 256, 00:23:19.497 "bdev_auto_examine": true, 00:23:19.497 "iobuf_small_cache_size": 128, 00:23:19.497 "iobuf_large_cache_size": 16 00:23:19.497 } 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "method": "bdev_raid_set_options", 00:23:19.497 "params": { 00:23:19.497 "process_window_size_kb": 1024, 
00:23:19.497 "process_max_bandwidth_mb_sec": 0 00:23:19.497 } 00:23:19.497 }, 00:23:19.497 { 00:23:19.497 "method": "bdev_iscsi_set_options", 00:23:19.497 "params": { 00:23:19.497 "timeout_sec": 30 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "bdev_nvme_set_options", 00:23:19.498 "params": { 00:23:19.498 "action_on_timeout": "none", 00:23:19.498 "timeout_us": 0, 00:23:19.498 "timeout_admin_us": 0, 00:23:19.498 "keep_alive_timeout_ms": 10000, 00:23:19.498 "arbitration_burst": 0, 00:23:19.498 "low_priority_weight": 0, 00:23:19.498 "medium_priority_weight": 0, 00:23:19.498 "high_priority_weight": 0, 00:23:19.498 "nvme_adminq_poll_period_us": 10000, 00:23:19.498 "nvme_ioq_poll_period_us": 0, 00:23:19.498 "io_queue_requests": 0, 00:23:19.498 "delay_cmd_submit": true, 00:23:19.498 "transport_retry_count": 4, 00:23:19.498 "bdev_retry_count": 3, 00:23:19.498 "transport_ack_timeout": 0, 00:23:19.498 "ctrlr_loss_timeout_sec": 0, 00:23:19.498 "reconnect_delay_sec": 0, 00:23:19.498 "fast_io_fail_timeout_sec": 0, 00:23:19.498 "disable_auto_failback": false, 00:23:19.498 "generate_uuids": false, 00:23:19.498 "transport_tos": 0, 00:23:19.498 "nvme_error_stat": false, 00:23:19.498 "rdma_srq_size": 0, 00:23:19.498 "io_path_stat": false, 00:23:19.498 "allow_accel_sequence": false, 00:23:19.498 "rdma_max_cq_size": 0, 00:23:19.498 "rdma_cm_event_timeout_ms": 0, 00:23:19.498 "dhchap_digests": [ 00:23:19.498 "sha256", 00:23:19.498 "sha384", 00:23:19.498 "sha512" 00:23:19.498 ], 00:23:19.498 "dhchap_dhgroups": [ 00:23:19.498 "null", 00:23:19.498 "ffdhe2048", 00:23:19.498 "ffdhe3072", 00:23:19.498 "ffdhe4096", 00:23:19.498 "ffdhe6144", 00:23:19.498 "ffdhe8192" 00:23:19.498 ], 00:23:19.498 "rdma_umr_per_io": false 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "bdev_nvme_set_hotplug", 00:23:19.498 "params": { 00:23:19.498 "period_us": 100000, 00:23:19.498 "enable": false 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "bdev_malloc_create", 00:23:19.498 "params": { 00:23:19.498 "name": "malloc0", 00:23:19.498 "num_blocks": 8192, 00:23:19.498 "block_size": 4096, 00:23:19.498 "physical_block_size": 4096, 00:23:19.498 "uuid": "7a32425e-af06-4c03-bbcc-f39c96375bd4", 00:23:19.498 "optimal_io_boundary": 0, 00:23:19.498 "md_size": 0, 00:23:19.498 "dif_type": 0, 00:23:19.498 "dif_is_head_of_md": false, 00:23:19.498 "dif_pi_format": 0 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "bdev_wait_for_examine" 00:23:19.498 } 00:23:19.498 ] 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "subsystem": "nbd", 00:23:19.498 "config": [] 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "subsystem": "scheduler", 00:23:19.498 "config": [ 00:23:19.498 { 00:23:19.498 "method": "framework_set_scheduler", 00:23:19.498 "params": { 00:23:19.498 "name": "static" 00:23:19.498 } 00:23:19.498 } 00:23:19.498 ] 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "subsystem": "nvmf", 00:23:19.498 "config": [ 00:23:19.498 { 00:23:19.498 "method": "nvmf_set_config", 00:23:19.498 "params": { 00:23:19.498 "discovery_filter": "match_any", 00:23:19.498 "admin_cmd_passthru": { 00:23:19.498 "identify_ctrlr": false 00:23:19.498 }, 00:23:19.498 "dhchap_digests": [ 00:23:19.498 "sha256", 00:23:19.498 "sha384", 00:23:19.498 "sha512" 00:23:19.498 ], 00:23:19.498 "dhchap_dhgroups": [ 00:23:19.498 "null", 00:23:19.498 "ffdhe2048", 00:23:19.498 "ffdhe3072", 00:23:19.498 "ffdhe4096", 00:23:19.498 "ffdhe6144", 00:23:19.498 "ffdhe8192" 00:23:19.498 ] 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 
00:23:19.498 "method": "nvmf_set_max_subsystems", 00:23:19.498 "params": { 00:23:19.498 "max_subsystems": 1024 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "nvmf_set_crdt", 00:23:19.498 "params": { 00:23:19.498 "crdt1": 0, 00:23:19.498 "crdt2": 0, 00:23:19.498 "crdt3": 0 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "nvmf_create_transport", 00:23:19.498 "params": { 00:23:19.498 "trtype": "TCP", 00:23:19.498 "max_queue_depth": 128, 00:23:19.498 "max_io_qpairs_per_ctrlr": 127, 00:23:19.498 "in_capsule_data_size": 4096, 00:23:19.498 "max_io_size": 131072, 00:23:19.498 "io_unit_size": 131072, 00:23:19.498 "max_aq_depth": 128, 00:23:19.498 "num_shared_buffers": 511, 00:23:19.498 "buf_cache_size": 4294967295, 00:23:19.498 "dif_insert_or_strip": false, 00:23:19.498 "zcopy": false, 00:23:19.498 "c2h_success": false, 00:23:19.498 "sock_priority": 0, 00:23:19.498 "abort_timeout_sec": 1, 00:23:19.498 "ack_timeout": 0, 00:23:19.498 "data_wr_pool_size": 0 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "nvmf_create_subsystem", 00:23:19.498 "params": { 00:23:19.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.498 "allow_any_host": false, 00:23:19.498 "serial_number": "SPDK00000000000001", 00:23:19.498 "model_number": "SPDK bdev Controller", 00:23:19.498 "max_namespaces": 10, 00:23:19.498 "min_cntlid": 1, 00:23:19.498 "max_cntlid": 65519, 00:23:19.498 "ana_reporting": false 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "nvmf_subsystem_add_host", 00:23:19.498 "params": { 00:23:19.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.498 "host": "nqn.2016-06.io.spdk:host1", 00:23:19.498 "psk": "key0" 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "nvmf_subsystem_add_ns", 00:23:19.498 "params": { 00:23:19.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.498 "namespace": { 00:23:19.498 "nsid": 1, 00:23:19.498 "bdev_name": "malloc0", 00:23:19.498 "nguid": "7A32425EAF064C03BBCCF39C96375BD4", 00:23:19.498 "uuid": "7a32425e-af06-4c03-bbcc-f39c96375bd4", 00:23:19.498 "no_auto_visible": false 00:23:19.498 } 00:23:19.498 } 00:23:19.498 }, 00:23:19.498 { 00:23:19.498 "method": "nvmf_subsystem_add_listener", 00:23:19.498 "params": { 00:23:19.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.498 "listen_address": { 00:23:19.498 "trtype": "TCP", 00:23:19.498 "adrfam": "IPv4", 00:23:19.498 "traddr": "10.0.0.2", 00:23:19.498 "trsvcid": "4420" 00:23:19.498 }, 00:23:19.498 "secure_channel": true 00:23:19.498 } 00:23:19.498 } 00:23:19.498 ] 00:23:19.498 } 00:23:19.498 ] 00:23:19.498 }' 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2715671 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2715671 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2715671 ']' 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.498 03:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.498 03:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.498 [2024-12-13 03:34:20.530904] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:19.498 [2024-12-13 03:34:20.531000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.498 [2024-12-13 03:34:20.653807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.756 [2024-12-13 03:34:20.755164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.757 [2024-12-13 03:34:20.755208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.757 [2024-12-13 03:34:20.755218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.757 [2024-12-13 03:34:20.755229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.757 [2024-12-13 03:34:20.755236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.757 [2024-12-13 03:34:20.756591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.323 [2024-12-13 03:34:21.249450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.323 [2024-12-13 03:34:21.281502] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.323 [2024-12-13 03:34:21.281768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2715824 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2715824 /var/tmp/bdevperf.sock 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2715824 ']' 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 
00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.323 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:20.323 "subsystems": [ 00:23:20.323 { 00:23:20.323 "subsystem": "keyring", 00:23:20.323 "config": [ 00:23:20.323 { 00:23:20.323 "method": "keyring_file_add_key", 00:23:20.323 "params": { 00:23:20.323 "name": "key0", 00:23:20.323 "path": "/tmp/tmp.Ov21J81T58" 00:23:20.323 } 00:23:20.323 } 00:23:20.323 ] 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "subsystem": "iobuf", 00:23:20.323 "config": [ 00:23:20.323 { 00:23:20.323 "method": "iobuf_set_options", 00:23:20.323 "params": { 00:23:20.323 "small_pool_count": 8192, 00:23:20.323 "large_pool_count": 1024, 00:23:20.323 "small_bufsize": 8192, 00:23:20.323 "large_bufsize": 135168, 00:23:20.323 "enable_numa": false 00:23:20.323 } 00:23:20.323 } 00:23:20.323 ] 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "subsystem": "sock", 00:23:20.323 "config": [ 00:23:20.323 { 00:23:20.323 "method": "sock_set_default_impl", 00:23:20.323 "params": { 00:23:20.323 "impl_name": "posix" 00:23:20.323 } 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "method": "sock_impl_set_options", 00:23:20.323 "params": { 00:23:20.323 "impl_name": "ssl", 00:23:20.323 "recv_buf_size": 4096, 00:23:20.323 "send_buf_size": 4096, 00:23:20.323 "enable_recv_pipe": true, 00:23:20.323 "enable_quickack": false, 00:23:20.323 "enable_placement_id": 0, 00:23:20.323 "enable_zerocopy_send_server": true, 00:23:20.323 "enable_zerocopy_send_client": false, 00:23:20.323 "zerocopy_threshold": 0, 00:23:20.323 "tls_version": 0, 00:23:20.323 "enable_ktls": false 00:23:20.323 } 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "method": "sock_impl_set_options", 00:23:20.323 "params": { 00:23:20.323 "impl_name": "posix", 00:23:20.323 "recv_buf_size": 2097152, 00:23:20.323 "send_buf_size": 2097152, 00:23:20.323 "enable_recv_pipe": true, 00:23:20.323 "enable_quickack": false, 00:23:20.323 "enable_placement_id": 0, 00:23:20.323 "enable_zerocopy_send_server": true, 00:23:20.323 "enable_zerocopy_send_client": false, 00:23:20.323 "zerocopy_threshold": 0, 00:23:20.323 "tls_version": 0, 00:23:20.323 "enable_ktls": false 00:23:20.323 } 00:23:20.323 } 00:23:20.323 ] 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "subsystem": "vmd", 00:23:20.323 "config": [] 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "subsystem": "accel", 00:23:20.323 "config": [ 00:23:20.323 { 00:23:20.323 "method": "accel_set_options", 00:23:20.323 "params": { 00:23:20.323 "small_cache_size": 128, 00:23:20.323 "large_cache_size": 16, 00:23:20.323 "task_count": 2048, 00:23:20.323 "sequence_count": 2048, 00:23:20.323 "buf_count": 2048 00:23:20.323 } 00:23:20.323 } 00:23:20.323 ] 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "subsystem": "bdev", 00:23:20.323 "config": [ 00:23:20.323 { 00:23:20.323 "method": "bdev_set_options", 00:23:20.323 "params": { 00:23:20.323 "bdev_io_pool_size": 65535, 00:23:20.323 "bdev_io_cache_size": 256, 00:23:20.323 "bdev_auto_examine": true, 00:23:20.323 "iobuf_small_cache_size": 128, 00:23:20.323 "iobuf_large_cache_size": 16 00:23:20.323 } 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "method": "bdev_raid_set_options", 00:23:20.323 
"params": { 00:23:20.323 "process_window_size_kb": 1024, 00:23:20.323 "process_max_bandwidth_mb_sec": 0 00:23:20.323 } 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "method": "bdev_iscsi_set_options", 00:23:20.323 "params": { 00:23:20.323 "timeout_sec": 30 00:23:20.323 } 00:23:20.323 }, 00:23:20.323 { 00:23:20.323 "method": "bdev_nvme_set_options", 00:23:20.323 "params": { 00:23:20.323 "action_on_timeout": "none", 00:23:20.323 "timeout_us": 0, 00:23:20.323 "timeout_admin_us": 0, 00:23:20.323 "keep_alive_timeout_ms": 10000, 00:23:20.323 "arbitration_burst": 0, 00:23:20.323 "low_priority_weight": 0, 00:23:20.323 "medium_priority_weight": 0, 00:23:20.323 "high_priority_weight": 0, 00:23:20.323 "nvme_adminq_poll_period_us": 10000, 00:23:20.323 "nvme_ioq_poll_period_us": 0, 00:23:20.323 "io_queue_requests": 512, 00:23:20.323 "delay_cmd_submit": true, 00:23:20.323 "transport_retry_count": 4, 00:23:20.323 "bdev_retry_count": 3, 00:23:20.324 "transport_ack_timeout": 0, 00:23:20.324 "ctrlr_loss_timeout_sec": 0, 00:23:20.324 "reconnect_delay_sec": 0, 00:23:20.324 "fast_io_fail_timeout_sec": 0, 00:23:20.324 "disable_auto_failback": false, 00:23:20.324 "generate_uuids": false, 00:23:20.324 "transport_tos": 0, 00:23:20.324 "nvme_error_stat": false, 00:23:20.324 "rdma_srq_size": 0, 00:23:20.324 "io_path_stat": false, 00:23:20.324 "allow_accel_sequence": false, 00:23:20.324 "rdma_max_cq_size": 0, 00:23:20.324 "rdma_cm_event_timeout_ms": 0, 00:23:20.324 "dhchap_digests": [ 00:23:20.324 "sha256", 00:23:20.324 "sha384", 00:23:20.324 "sha512" 00:23:20.324 ], 00:23:20.324 "dhchap_dhgroups": [ 00:23:20.324 "null", 00:23:20.324 "ffdhe2048", 00:23:20.324 "ffdhe3072", 00:23:20.324 "ffdhe4096", 00:23:20.324 "ffdhe6144", 00:23:20.324 "ffdhe8192" 00:23:20.324 ], 00:23:20.324 "rdma_umr_per_io": false 00:23:20.324 } 00:23:20.324 }, 00:23:20.324 { 00:23:20.324 "method": "bdev_nvme_attach_controller", 00:23:20.324 "params": { 00:23:20.324 "name": "TLSTEST", 00:23:20.324 "trtype": "TCP", 00:23:20.324 "adrfam": "IPv4", 00:23:20.324 "traddr": "10.0.0.2", 00:23:20.324 "trsvcid": "4420", 00:23:20.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.324 "prchk_reftag": false, 00:23:20.324 "prchk_guard": false, 00:23:20.324 "ctrlr_loss_timeout_sec": 0, 00:23:20.324 "reconnect_delay_sec": 0, 00:23:20.324 "fast_io_fail_timeout_sec": 0, 00:23:20.324 "psk": "key0", 00:23:20.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.324 "hdgst": false, 00:23:20.324 "ddgst": false, 00:23:20.324 "multipath": "multipath" 00:23:20.324 } 00:23:20.324 }, 00:23:20.324 { 00:23:20.324 "method": "bdev_nvme_set_hotplug", 00:23:20.324 "params": { 00:23:20.324 "period_us": 100000, 00:23:20.324 "enable": false 00:23:20.324 } 00:23:20.324 }, 00:23:20.324 { 00:23:20.324 "method": "bdev_wait_for_examine" 00:23:20.324 } 00:23:20.324 ] 00:23:20.324 }, 00:23:20.324 { 00:23:20.324 "subsystem": "nbd", 00:23:20.324 "config": [] 00:23:20.324 } 00:23:20.324 ] 00:23:20.324 }' 00:23:20.324 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.324 03:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.324 [2024-12-13 03:34:21.433096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:20.324 [2024-12-13 03:34:21.433182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715824 ] 00:23:20.582 [2024-12-13 03:34:21.541769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.582 [2024-12-13 03:34:21.652232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.148 [2024-12-13 03:34:22.060224] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.148 03:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.148 03:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.148 03:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:21.148 Running I/O for 10 seconds... 00:23:23.457 4533.00 IOPS, 17.71 MiB/s [2024-12-13T02:34:25.600Z] 4636.50 IOPS, 18.11 MiB/s [2024-12-13T02:34:26.534Z] 4585.67 IOPS, 17.91 MiB/s [2024-12-13T02:34:27.468Z] 4624.00 IOPS, 18.06 MiB/s [2024-12-13T02:34:28.402Z] 4646.80 IOPS, 18.15 MiB/s [2024-12-13T02:34:29.775Z] 4657.83 IOPS, 18.19 MiB/s [2024-12-13T02:34:30.710Z] 4613.43 IOPS, 18.02 MiB/s [2024-12-13T02:34:31.645Z] 4574.38 IOPS, 17.87 MiB/s [2024-12-13T02:34:32.579Z] 4533.22 IOPS, 17.71 MiB/s [2024-12-13T02:34:32.579Z] 4508.30 IOPS, 17.61 MiB/s 00:23:31.370 Latency(us) 00:23:31.370 [2024-12-13T02:34:32.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:31.370 Verification LBA range: start 0x0 length 0x2000 00:23:31.370 TLSTESTn1 : 10.02 4512.26 17.63 0.00 0.00 28322.21 5648.58 33953.89 00:23:31.370 [2024-12-13T02:34:32.579Z] =================================================================================================================== 00:23:31.370 [2024-12-13T02:34:32.579Z] Total : 4512.26 17.63 0.00 0.00 28322.21 5648.58 33953.89 00:23:31.370 { 00:23:31.370 "results": [ 00:23:31.370 { 00:23:31.370 "job": "TLSTESTn1", 00:23:31.370 "core_mask": "0x4", 00:23:31.370 "workload": "verify", 00:23:31.370 "status": "finished", 00:23:31.370 "verify_range": { 00:23:31.370 "start": 0, 00:23:31.370 "length": 8192 00:23:31.370 }, 00:23:31.370 "queue_depth": 128, 00:23:31.370 "io_size": 4096, 00:23:31.370 "runtime": 10.019369, 00:23:31.370 "iops": 4512.260203212398, 00:23:31.370 "mibps": 17.62601641879843, 00:23:31.370 "io_failed": 0, 00:23:31.370 "io_timeout": 0, 00:23:31.371 "avg_latency_us": 28322.206147228277, 00:23:31.371 "min_latency_us": 5648.579047619048, 00:23:31.371 "max_latency_us": 33953.88952380952 00:23:31.371 } 00:23:31.371 ], 00:23:31.371 "core_count": 1 00:23:31.371 } 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2715824 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2715824 ']' 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2715824 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715824 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715824' 00:23:31.371 killing process with pid 2715824 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2715824 00:23:31.371 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.371 00:23:31.371 Latency(us) 00:23:31.371 [2024-12-13T02:34:32.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.371 [2024-12-13T02:34:32.580Z] =================================================================================================================== 00:23:31.371 [2024-12-13T02:34:32.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.371 03:34:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2715824 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2715671 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2715671 ']' 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2715671 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715671 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715671' 00:23:32.305 killing process with pid 2715671 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2715671 00:23:32.305 03:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2715671 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2717999 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2717999 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2717999 ']' 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.680 03:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.680 [2024-12-13 03:34:34.692229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:33.680 [2024-12-13 03:34:34.692333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.680 [2024-12-13 03:34:34.810608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.938 [2024-12-13 03:34:34.913186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.938 [2024-12-13 03:34:34.913229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.938 [2024-12-13 03:34:34.913239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.938 [2024-12-13 03:34:34.913251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.938 [2024-12-13 03:34:34.913259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
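Note: the setup_nvmf_tgt sequence echoed in the lines that follow (target/tls.sh@50-59) boils down to a short series of rpc.py calls. In outline, with scripts/rpc.py abbreviated and the address, NQNs and key path taken from this run:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a TLS-capable listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.Ov21J81T58
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0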
00:23:33.938 [2024-12-13 03:34:34.914480] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Ov21J81T58 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ov21J81T58 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.503 [2024-12-13 03:34:35.688092] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.503 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.761 03:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:35.019 [2024-12-13 03:34:36.069091] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.019 [2024-12-13 03:34:36.069363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.019 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.277 malloc0 00:23:35.277 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.535 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:35.535 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2718312 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2718312 /var/tmp/bdevperf.sock 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2718312 ']' 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.793 03:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.793 [2024-12-13 03:34:36.909757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:35.793 [2024-12-13 03:34:36.909848] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718312 ] 00:23:36.051 [2024-12-13 03:34:37.027057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.051 [2024-12-13 03:34:37.135381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.617 03:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.617 03:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.617 03:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:36.875 03:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:36.875 [2024-12-13 03:34:38.064532] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.133 nvme0n1 00:23:37.133 03:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:37.133 Running I/O for 1 seconds... 
00:23:38.066 4587.00 IOPS, 17.92 MiB/s 00:23:38.066 Latency(us) 00:23:38.066 [2024-12-13T02:34:39.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.066 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:38.066 Verification LBA range: start 0x0 length 0x2000 00:23:38.066 nvme0n1 : 1.01 4649.13 18.16 0.00 0.00 27336.96 5523.75 26464.06 00:23:38.066 [2024-12-13T02:34:39.275Z] =================================================================================================================== 00:23:38.066 [2024-12-13T02:34:39.275Z] Total : 4649.13 18.16 0.00 0.00 27336.96 5523.75 26464.06 00:23:38.066 { 00:23:38.066 "results": [ 00:23:38.066 { 00:23:38.066 "job": "nvme0n1", 00:23:38.066 "core_mask": "0x2", 00:23:38.066 "workload": "verify", 00:23:38.066 "status": "finished", 00:23:38.066 "verify_range": { 00:23:38.066 "start": 0, 00:23:38.066 "length": 8192 00:23:38.066 }, 00:23:38.066 "queue_depth": 128, 00:23:38.066 "io_size": 4096, 00:23:38.066 "runtime": 1.014169, 00:23:38.066 "iops": 4649.12652624957, 00:23:38.066 "mibps": 18.160650493162382, 00:23:38.066 "io_failed": 0, 00:23:38.066 "io_timeout": 0, 00:23:38.066 "avg_latency_us": 27336.960891986066, 00:23:38.067 "min_latency_us": 5523.748571428571, 00:23:38.067 "max_latency_us": 26464.06095238095 00:23:38.067 } 00:23:38.067 ], 00:23:38.067 "core_count": 1 00:23:38.067 } 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2718312 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2718312 ']' 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2718312 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718312 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718312' 00:23:38.325 killing process with pid 2718312 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2718312 00:23:38.325 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.325 00:23:38.325 Latency(us) 00:23:38.325 [2024-12-13T02:34:39.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.325 [2024-12-13T02:34:39.534Z] =================================================================================================================== 00:23:38.325 [2024-12-13T02:34:39.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.325 03:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2718312 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2717999 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2717999 ']' 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2717999 00:23:39.260 03:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2717999 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2717999' 00:23:39.260 killing process with pid 2717999 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2717999 00:23:39.260 03:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2717999 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2719138 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2719138 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2719138 ']' 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.634 03:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.634 [2024-12-13 03:34:41.601058] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:40.634 [2024-12-13 03:34:41.601151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.634 [2024-12-13 03:34:41.719001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.634 [2024-12-13 03:34:41.821854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.635 [2024-12-13 03:34:41.821899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:40.635 [2024-12-13 03:34:41.821912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.635 [2024-12-13 03:34:41.821928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.635 [2024-12-13 03:34:41.821936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.635 [2024-12-13 03:34:41.823391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.200 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.200 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.200 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.200 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.200 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.458 [2024-12-13 03:34:42.439834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.458 malloc0 00:23:41.458 [2024-12-13 03:34:42.493114] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.458 [2024-12-13 03:34:42.493392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2719240 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2719240 /var/tmp/bdevperf.sock 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2719240 ']' 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.458 03:34:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.458 [2024-12-13 03:34:42.581735] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:41.458 [2024-12-13 03:34:42.581813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2719240 ] 00:23:41.716 [2024-12-13 03:34:42.694346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.716 [2024-12-13 03:34:42.805161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.282 03:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.282 03:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.282 03:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ov21J81T58 00:23:42.540 03:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:42.540 [2024-12-13 03:34:43.724714] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.798 nvme0n1 00:23:42.798 03:34:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.798 Running I/O for 1 seconds... 00:23:43.730 4514.00 IOPS, 17.63 MiB/s 00:23:43.730 Latency(us) 00:23:43.730 [2024-12-13T02:34:44.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.730 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:43.730 Verification LBA range: start 0x0 length 0x2000 00:23:43.730 nvme0n1 : 1.02 4567.71 17.84 0.00 0.00 27798.57 5929.45 28336.52 00:23:43.730 [2024-12-13T02:34:44.939Z] =================================================================================================================== 00:23:43.730 [2024-12-13T02:34:44.939Z] Total : 4567.71 17.84 0.00 0.00 27798.57 5929.45 28336.52 00:23:43.730 { 00:23:43.730 "results": [ 00:23:43.730 { 00:23:43.730 "job": "nvme0n1", 00:23:43.730 "core_mask": "0x2", 00:23:43.730 "workload": "verify", 00:23:43.730 "status": "finished", 00:23:43.730 "verify_range": { 00:23:43.730 "start": 0, 00:23:43.730 "length": 8192 00:23:43.730 }, 00:23:43.730 "queue_depth": 128, 00:23:43.730 "io_size": 4096, 00:23:43.730 "runtime": 1.016265, 00:23:43.730 "iops": 4567.706257718213, 00:23:43.730 "mibps": 17.84260256921177, 00:23:43.730 "io_failed": 0, 00:23:43.730 "io_timeout": 0, 00:23:43.730 "avg_latency_us": 27798.56650950945, 00:23:43.730 "min_latency_us": 5929.447619047619, 00:23:43.730 "max_latency_us": 28336.518095238094 00:23:43.730 } 00:23:43.730 ], 00:23:43.730 "core_count": 1 00:23:43.730 } 00:23:43.988 03:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:43.988 03:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.988 03:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.988 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.988 03:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:43.988 "subsystems": [ 00:23:43.988 { 00:23:43.988 "subsystem": "keyring", 00:23:43.988 "config": [ 00:23:43.988 { 00:23:43.988 "method": "keyring_file_add_key", 00:23:43.988 "params": { 00:23:43.988 "name": "key0", 00:23:43.988 "path": "/tmp/tmp.Ov21J81T58" 00:23:43.988 } 00:23:43.988 } 00:23:43.988 ] 00:23:43.988 }, 00:23:43.988 { 00:23:43.988 "subsystem": "iobuf", 00:23:43.988 "config": [ 00:23:43.988 { 00:23:43.988 "method": "iobuf_set_options", 00:23:43.988 "params": { 00:23:43.988 "small_pool_count": 8192, 00:23:43.988 "large_pool_count": 1024, 00:23:43.988 "small_bufsize": 8192, 00:23:43.988 "large_bufsize": 135168, 00:23:43.988 "enable_numa": false 00:23:43.988 } 00:23:43.988 } 00:23:43.988 ] 00:23:43.988 }, 00:23:43.988 { 00:23:43.988 "subsystem": "sock", 00:23:43.988 "config": [ 00:23:43.988 { 00:23:43.988 "method": "sock_set_default_impl", 00:23:43.988 "params": { 00:23:43.988 "impl_name": "posix" 00:23:43.988 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "sock_impl_set_options", 00:23:43.989 "params": { 00:23:43.989 "impl_name": "ssl", 00:23:43.989 "recv_buf_size": 4096, 00:23:43.989 "send_buf_size": 4096, 00:23:43.989 "enable_recv_pipe": true, 00:23:43.989 "enable_quickack": false, 00:23:43.989 "enable_placement_id": 0, 00:23:43.989 "enable_zerocopy_send_server": true, 00:23:43.989 "enable_zerocopy_send_client": false, 00:23:43.989 "zerocopy_threshold": 0, 00:23:43.989 "tls_version": 0, 00:23:43.989 "enable_ktls": false 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "sock_impl_set_options", 00:23:43.989 "params": { 00:23:43.989 "impl_name": "posix", 00:23:43.989 "recv_buf_size": 2097152, 00:23:43.989 "send_buf_size": 2097152, 00:23:43.989 "enable_recv_pipe": true, 00:23:43.989 "enable_quickack": false, 00:23:43.989 "enable_placement_id": 0, 00:23:43.989 "enable_zerocopy_send_server": true, 00:23:43.989 "enable_zerocopy_send_client": false, 00:23:43.989 "zerocopy_threshold": 0, 00:23:43.989 "tls_version": 0, 00:23:43.989 "enable_ktls": false 00:23:43.989 } 00:23:43.989 } 00:23:43.989 ] 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "subsystem": "vmd", 00:23:43.989 "config": [] 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "subsystem": "accel", 00:23:43.989 "config": [ 00:23:43.989 { 00:23:43.989 "method": "accel_set_options", 00:23:43.989 "params": { 00:23:43.989 "small_cache_size": 128, 00:23:43.989 "large_cache_size": 16, 00:23:43.989 "task_count": 2048, 00:23:43.989 "sequence_count": 2048, 00:23:43.989 "buf_count": 2048 00:23:43.989 } 00:23:43.989 } 00:23:43.989 ] 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "subsystem": "bdev", 00:23:43.989 "config": [ 00:23:43.989 { 00:23:43.989 "method": "bdev_set_options", 00:23:43.989 "params": { 00:23:43.989 "bdev_io_pool_size": 65535, 00:23:43.989 "bdev_io_cache_size": 256, 00:23:43.989 "bdev_auto_examine": true, 00:23:43.989 "iobuf_small_cache_size": 128, 00:23:43.989 "iobuf_large_cache_size": 16 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "bdev_raid_set_options", 00:23:43.989 "params": { 00:23:43.989 "process_window_size_kb": 1024, 00:23:43.989 "process_max_bandwidth_mb_sec": 0 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "bdev_iscsi_set_options", 00:23:43.989 "params": { 00:23:43.989 "timeout_sec": 30 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "bdev_nvme_set_options", 00:23:43.989 "params": { 00:23:43.989 "action_on_timeout": "none", 00:23:43.989 
"timeout_us": 0, 00:23:43.989 "timeout_admin_us": 0, 00:23:43.989 "keep_alive_timeout_ms": 10000, 00:23:43.989 "arbitration_burst": 0, 00:23:43.989 "low_priority_weight": 0, 00:23:43.989 "medium_priority_weight": 0, 00:23:43.989 "high_priority_weight": 0, 00:23:43.989 "nvme_adminq_poll_period_us": 10000, 00:23:43.989 "nvme_ioq_poll_period_us": 0, 00:23:43.989 "io_queue_requests": 0, 00:23:43.989 "delay_cmd_submit": true, 00:23:43.989 "transport_retry_count": 4, 00:23:43.989 "bdev_retry_count": 3, 00:23:43.989 "transport_ack_timeout": 0, 00:23:43.989 "ctrlr_loss_timeout_sec": 0, 00:23:43.989 "reconnect_delay_sec": 0, 00:23:43.989 "fast_io_fail_timeout_sec": 0, 00:23:43.989 "disable_auto_failback": false, 00:23:43.989 "generate_uuids": false, 00:23:43.989 "transport_tos": 0, 00:23:43.989 "nvme_error_stat": false, 00:23:43.989 "rdma_srq_size": 0, 00:23:43.989 "io_path_stat": false, 00:23:43.989 "allow_accel_sequence": false, 00:23:43.989 "rdma_max_cq_size": 0, 00:23:43.989 "rdma_cm_event_timeout_ms": 0, 00:23:43.989 "dhchap_digests": [ 00:23:43.989 "sha256", 00:23:43.989 "sha384", 00:23:43.989 "sha512" 00:23:43.989 ], 00:23:43.989 "dhchap_dhgroups": [ 00:23:43.989 "null", 00:23:43.989 "ffdhe2048", 00:23:43.989 "ffdhe3072", 00:23:43.989 "ffdhe4096", 00:23:43.989 "ffdhe6144", 00:23:43.989 "ffdhe8192" 00:23:43.989 ], 00:23:43.989 "rdma_umr_per_io": false 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "bdev_nvme_set_hotplug", 00:23:43.989 "params": { 00:23:43.989 "period_us": 100000, 00:23:43.989 "enable": false 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "bdev_malloc_create", 00:23:43.989 "params": { 00:23:43.989 "name": "malloc0", 00:23:43.989 "num_blocks": 8192, 00:23:43.989 "block_size": 4096, 00:23:43.989 "physical_block_size": 4096, 00:23:43.989 "uuid": "e09ccf85-ea6c-4238-9804-f8a94a9a91e1", 00:23:43.989 "optimal_io_boundary": 0, 00:23:43.989 "md_size": 0, 00:23:43.989 "dif_type": 0, 00:23:43.989 "dif_is_head_of_md": false, 00:23:43.989 "dif_pi_format": 0 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "bdev_wait_for_examine" 00:23:43.989 } 00:23:43.989 ] 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "subsystem": "nbd", 00:23:43.989 "config": [] 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "subsystem": "scheduler", 00:23:43.989 "config": [ 00:23:43.989 { 00:23:43.989 "method": "framework_set_scheduler", 00:23:43.989 "params": { 00:23:43.989 "name": "static" 00:23:43.989 } 00:23:43.989 } 00:23:43.989 ] 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "subsystem": "nvmf", 00:23:43.989 "config": [ 00:23:43.989 { 00:23:43.989 "method": "nvmf_set_config", 00:23:43.989 "params": { 00:23:43.989 "discovery_filter": "match_any", 00:23:43.989 "admin_cmd_passthru": { 00:23:43.989 "identify_ctrlr": false 00:23:43.989 }, 00:23:43.989 "dhchap_digests": [ 00:23:43.989 "sha256", 00:23:43.989 "sha384", 00:23:43.989 "sha512" 00:23:43.989 ], 00:23:43.989 "dhchap_dhgroups": [ 00:23:43.989 "null", 00:23:43.989 "ffdhe2048", 00:23:43.989 "ffdhe3072", 00:23:43.989 "ffdhe4096", 00:23:43.989 "ffdhe6144", 00:23:43.989 "ffdhe8192" 00:23:43.989 ] 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "nvmf_set_max_subsystems", 00:23:43.989 "params": { 00:23:43.989 "max_subsystems": 1024 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "nvmf_set_crdt", 00:23:43.989 "params": { 00:23:43.989 "crdt1": 0, 00:23:43.989 "crdt2": 0, 00:23:43.989 "crdt3": 0 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": 
"nvmf_create_transport", 00:23:43.989 "params": { 00:23:43.989 "trtype": "TCP", 00:23:43.989 "max_queue_depth": 128, 00:23:43.989 "max_io_qpairs_per_ctrlr": 127, 00:23:43.989 "in_capsule_data_size": 4096, 00:23:43.989 "max_io_size": 131072, 00:23:43.989 "io_unit_size": 131072, 00:23:43.989 "max_aq_depth": 128, 00:23:43.989 "num_shared_buffers": 511, 00:23:43.989 "buf_cache_size": 4294967295, 00:23:43.989 "dif_insert_or_strip": false, 00:23:43.989 "zcopy": false, 00:23:43.989 "c2h_success": false, 00:23:43.989 "sock_priority": 0, 00:23:43.989 "abort_timeout_sec": 1, 00:23:43.989 "ack_timeout": 0, 00:23:43.989 "data_wr_pool_size": 0 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "nvmf_create_subsystem", 00:23:43.989 "params": { 00:23:43.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.989 "allow_any_host": false, 00:23:43.989 "serial_number": "00000000000000000000", 00:23:43.989 "model_number": "SPDK bdev Controller", 00:23:43.989 "max_namespaces": 32, 00:23:43.989 "min_cntlid": 1, 00:23:43.989 "max_cntlid": 65519, 00:23:43.989 "ana_reporting": false 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "nvmf_subsystem_add_host", 00:23:43.989 "params": { 00:23:43.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.989 "host": "nqn.2016-06.io.spdk:host1", 00:23:43.989 "psk": "key0" 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "nvmf_subsystem_add_ns", 00:23:43.989 "params": { 00:23:43.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.989 "namespace": { 00:23:43.989 "nsid": 1, 00:23:43.989 "bdev_name": "malloc0", 00:23:43.989 "nguid": "E09CCF85EA6C42389804F8A94A9A91E1", 00:23:43.989 "uuid": "e09ccf85-ea6c-4238-9804-f8a94a9a91e1", 00:23:43.989 "no_auto_visible": false 00:23:43.989 } 00:23:43.989 } 00:23:43.989 }, 00:23:43.989 { 00:23:43.989 "method": "nvmf_subsystem_add_listener", 00:23:43.989 "params": { 00:23:43.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.989 "listen_address": { 00:23:43.989 "trtype": "TCP", 00:23:43.989 "adrfam": "IPv4", 00:23:43.989 "traddr": "10.0.0.2", 00:23:43.989 "trsvcid": "4420" 00:23:43.989 }, 00:23:43.989 "secure_channel": false, 00:23:43.989 "sock_impl": "ssl" 00:23:43.989 } 00:23:43.989 } 00:23:43.989 ] 00:23:43.989 } 00:23:43.989 ] 00:23:43.989 }' 00:23:43.989 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:44.247 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:44.247 "subsystems": [ 00:23:44.247 { 00:23:44.247 "subsystem": "keyring", 00:23:44.247 "config": [ 00:23:44.247 { 00:23:44.247 "method": "keyring_file_add_key", 00:23:44.247 "params": { 00:23:44.247 "name": "key0", 00:23:44.247 "path": "/tmp/tmp.Ov21J81T58" 00:23:44.247 } 00:23:44.247 } 00:23:44.247 ] 00:23:44.247 }, 00:23:44.247 { 00:23:44.247 "subsystem": "iobuf", 00:23:44.247 "config": [ 00:23:44.247 { 00:23:44.247 "method": "iobuf_set_options", 00:23:44.247 "params": { 00:23:44.247 "small_pool_count": 8192, 00:23:44.247 "large_pool_count": 1024, 00:23:44.247 "small_bufsize": 8192, 00:23:44.247 "large_bufsize": 135168, 00:23:44.247 "enable_numa": false 00:23:44.247 } 00:23:44.247 } 00:23:44.248 ] 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "subsystem": "sock", 00:23:44.248 "config": [ 00:23:44.248 { 00:23:44.248 "method": "sock_set_default_impl", 00:23:44.248 "params": { 00:23:44.248 "impl_name": "posix" 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 
"method": "sock_impl_set_options", 00:23:44.248 "params": { 00:23:44.248 "impl_name": "ssl", 00:23:44.248 "recv_buf_size": 4096, 00:23:44.248 "send_buf_size": 4096, 00:23:44.248 "enable_recv_pipe": true, 00:23:44.248 "enable_quickack": false, 00:23:44.248 "enable_placement_id": 0, 00:23:44.248 "enable_zerocopy_send_server": true, 00:23:44.248 "enable_zerocopy_send_client": false, 00:23:44.248 "zerocopy_threshold": 0, 00:23:44.248 "tls_version": 0, 00:23:44.248 "enable_ktls": false 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "sock_impl_set_options", 00:23:44.248 "params": { 00:23:44.248 "impl_name": "posix", 00:23:44.248 "recv_buf_size": 2097152, 00:23:44.248 "send_buf_size": 2097152, 00:23:44.248 "enable_recv_pipe": true, 00:23:44.248 "enable_quickack": false, 00:23:44.248 "enable_placement_id": 0, 00:23:44.248 "enable_zerocopy_send_server": true, 00:23:44.248 "enable_zerocopy_send_client": false, 00:23:44.248 "zerocopy_threshold": 0, 00:23:44.248 "tls_version": 0, 00:23:44.248 "enable_ktls": false 00:23:44.248 } 00:23:44.248 } 00:23:44.248 ] 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "subsystem": "vmd", 00:23:44.248 "config": [] 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "subsystem": "accel", 00:23:44.248 "config": [ 00:23:44.248 { 00:23:44.248 "method": "accel_set_options", 00:23:44.248 "params": { 00:23:44.248 "small_cache_size": 128, 00:23:44.248 "large_cache_size": 16, 00:23:44.248 "task_count": 2048, 00:23:44.248 "sequence_count": 2048, 00:23:44.248 "buf_count": 2048 00:23:44.248 } 00:23:44.248 } 00:23:44.248 ] 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "subsystem": "bdev", 00:23:44.248 "config": [ 00:23:44.248 { 00:23:44.248 "method": "bdev_set_options", 00:23:44.248 "params": { 00:23:44.248 "bdev_io_pool_size": 65535, 00:23:44.248 "bdev_io_cache_size": 256, 00:23:44.248 "bdev_auto_examine": true, 00:23:44.248 "iobuf_small_cache_size": 128, 00:23:44.248 "iobuf_large_cache_size": 16 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_raid_set_options", 00:23:44.248 "params": { 00:23:44.248 "process_window_size_kb": 1024, 00:23:44.248 "process_max_bandwidth_mb_sec": 0 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_iscsi_set_options", 00:23:44.248 "params": { 00:23:44.248 "timeout_sec": 30 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_nvme_set_options", 00:23:44.248 "params": { 00:23:44.248 "action_on_timeout": "none", 00:23:44.248 "timeout_us": 0, 00:23:44.248 "timeout_admin_us": 0, 00:23:44.248 "keep_alive_timeout_ms": 10000, 00:23:44.248 "arbitration_burst": 0, 00:23:44.248 "low_priority_weight": 0, 00:23:44.248 "medium_priority_weight": 0, 00:23:44.248 "high_priority_weight": 0, 00:23:44.248 "nvme_adminq_poll_period_us": 10000, 00:23:44.248 "nvme_ioq_poll_period_us": 0, 00:23:44.248 "io_queue_requests": 512, 00:23:44.248 "delay_cmd_submit": true, 00:23:44.248 "transport_retry_count": 4, 00:23:44.248 "bdev_retry_count": 3, 00:23:44.248 "transport_ack_timeout": 0, 00:23:44.248 "ctrlr_loss_timeout_sec": 0, 00:23:44.248 "reconnect_delay_sec": 0, 00:23:44.248 "fast_io_fail_timeout_sec": 0, 00:23:44.248 "disable_auto_failback": false, 00:23:44.248 "generate_uuids": false, 00:23:44.248 "transport_tos": 0, 00:23:44.248 "nvme_error_stat": false, 00:23:44.248 "rdma_srq_size": 0, 00:23:44.248 "io_path_stat": false, 00:23:44.248 "allow_accel_sequence": false, 00:23:44.248 "rdma_max_cq_size": 0, 00:23:44.248 "rdma_cm_event_timeout_ms": 0, 00:23:44.248 "dhchap_digests": [ 00:23:44.248 
"sha256", 00:23:44.248 "sha384", 00:23:44.248 "sha512" 00:23:44.248 ], 00:23:44.248 "dhchap_dhgroups": [ 00:23:44.248 "null", 00:23:44.248 "ffdhe2048", 00:23:44.248 "ffdhe3072", 00:23:44.248 "ffdhe4096", 00:23:44.248 "ffdhe6144", 00:23:44.248 "ffdhe8192" 00:23:44.248 ], 00:23:44.248 "rdma_umr_per_io": false 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_nvme_attach_controller", 00:23:44.248 "params": { 00:23:44.248 "name": "nvme0", 00:23:44.248 "trtype": "TCP", 00:23:44.248 "adrfam": "IPv4", 00:23:44.248 "traddr": "10.0.0.2", 00:23:44.248 "trsvcid": "4420", 00:23:44.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.248 "prchk_reftag": false, 00:23:44.248 "prchk_guard": false, 00:23:44.248 "ctrlr_loss_timeout_sec": 0, 00:23:44.248 "reconnect_delay_sec": 0, 00:23:44.248 "fast_io_fail_timeout_sec": 0, 00:23:44.248 "psk": "key0", 00:23:44.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.248 "hdgst": false, 00:23:44.248 "ddgst": false, 00:23:44.248 "multipath": "multipath" 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_nvme_set_hotplug", 00:23:44.248 "params": { 00:23:44.248 "period_us": 100000, 00:23:44.248 "enable": false 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_enable_histogram", 00:23:44.248 "params": { 00:23:44.248 "name": "nvme0n1", 00:23:44.248 "enable": true 00:23:44.248 } 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "method": "bdev_wait_for_examine" 00:23:44.248 } 00:23:44.248 ] 00:23:44.248 }, 00:23:44.248 { 00:23:44.248 "subsystem": "nbd", 00:23:44.248 "config": [] 00:23:44.248 } 00:23:44.248 ] 00:23:44.248 }' 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2719240 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2719240 ']' 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2719240 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2719240 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2719240' 00:23:44.248 killing process with pid 2719240 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2719240 00:23:44.248 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.248 00:23:44.248 Latency(us) 00:23:44.248 [2024-12-13T02:34:45.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.248 [2024-12-13T02:34:45.457Z] =================================================================================================================== 00:23:44.248 [2024-12-13T02:34:45.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.248 03:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2719240 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2719138 00:23:45.182 03:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2719138 ']' 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2719138 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2719138 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2719138' 00:23:45.182 killing process with pid 2719138 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2719138 00:23:45.182 03:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2719138 00:23:46.557 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:46.557 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.557 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.557 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.557 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:46.557 "subsystems": [ 00:23:46.557 { 00:23:46.557 "subsystem": "keyring", 00:23:46.557 "config": [ 00:23:46.557 { 00:23:46.557 "method": "keyring_file_add_key", 00:23:46.557 "params": { 00:23:46.557 "name": "key0", 00:23:46.557 "path": "/tmp/tmp.Ov21J81T58" 00:23:46.557 } 00:23:46.557 } 00:23:46.557 ] 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "subsystem": "iobuf", 00:23:46.557 "config": [ 00:23:46.557 { 00:23:46.557 "method": "iobuf_set_options", 00:23:46.557 "params": { 00:23:46.557 "small_pool_count": 8192, 00:23:46.557 "large_pool_count": 1024, 00:23:46.557 "small_bufsize": 8192, 00:23:46.557 "large_bufsize": 135168, 00:23:46.557 "enable_numa": false 00:23:46.557 } 00:23:46.557 } 00:23:46.557 ] 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "subsystem": "sock", 00:23:46.557 "config": [ 00:23:46.557 { 00:23:46.557 "method": "sock_set_default_impl", 00:23:46.557 "params": { 00:23:46.557 "impl_name": "posix" 00:23:46.557 } 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "method": "sock_impl_set_options", 00:23:46.557 "params": { 00:23:46.557 "impl_name": "ssl", 00:23:46.557 "recv_buf_size": 4096, 00:23:46.557 "send_buf_size": 4096, 00:23:46.557 "enable_recv_pipe": true, 00:23:46.557 "enable_quickack": false, 00:23:46.557 "enable_placement_id": 0, 00:23:46.557 "enable_zerocopy_send_server": true, 00:23:46.557 "enable_zerocopy_send_client": false, 00:23:46.557 "zerocopy_threshold": 0, 00:23:46.557 "tls_version": 0, 00:23:46.557 "enable_ktls": false 00:23:46.557 } 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "method": "sock_impl_set_options", 00:23:46.557 "params": { 00:23:46.557 "impl_name": "posix", 00:23:46.557 "recv_buf_size": 2097152, 00:23:46.557 "send_buf_size": 2097152, 00:23:46.557 "enable_recv_pipe": true, 00:23:46.557 "enable_quickack": false, 00:23:46.557 "enable_placement_id": 0, 
00:23:46.557 "enable_zerocopy_send_server": true, 00:23:46.557 "enable_zerocopy_send_client": false, 00:23:46.557 "zerocopy_threshold": 0, 00:23:46.557 "tls_version": 0, 00:23:46.557 "enable_ktls": false 00:23:46.557 } 00:23:46.557 } 00:23:46.557 ] 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "subsystem": "vmd", 00:23:46.557 "config": [] 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "subsystem": "accel", 00:23:46.557 "config": [ 00:23:46.557 { 00:23:46.557 "method": "accel_set_options", 00:23:46.557 "params": { 00:23:46.557 "small_cache_size": 128, 00:23:46.557 "large_cache_size": 16, 00:23:46.557 "task_count": 2048, 00:23:46.557 "sequence_count": 2048, 00:23:46.557 "buf_count": 2048 00:23:46.557 } 00:23:46.557 } 00:23:46.557 ] 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "subsystem": "bdev", 00:23:46.557 "config": [ 00:23:46.557 { 00:23:46.557 "method": "bdev_set_options", 00:23:46.557 "params": { 00:23:46.557 "bdev_io_pool_size": 65535, 00:23:46.557 "bdev_io_cache_size": 256, 00:23:46.557 "bdev_auto_examine": true, 00:23:46.557 "iobuf_small_cache_size": 128, 00:23:46.557 "iobuf_large_cache_size": 16 00:23:46.557 } 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "method": "bdev_raid_set_options", 00:23:46.557 "params": { 00:23:46.557 "process_window_size_kb": 1024, 00:23:46.557 "process_max_bandwidth_mb_sec": 0 00:23:46.557 } 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "method": "bdev_iscsi_set_options", 00:23:46.557 "params": { 00:23:46.557 "timeout_sec": 30 00:23:46.557 } 00:23:46.557 }, 00:23:46.557 { 00:23:46.557 "method": "bdev_nvme_set_options", 00:23:46.557 "params": { 00:23:46.557 "action_on_timeout": "none", 00:23:46.558 "timeout_us": 0, 00:23:46.558 "timeout_admin_us": 0, 00:23:46.558 "keep_alive_timeout_ms": 10000, 00:23:46.558 "arbitration_burst": 0, 00:23:46.558 "low_priority_weight": 0, 00:23:46.558 "medium_priority_weight": 0, 00:23:46.558 "high_priority_weight": 0, 00:23:46.558 "nvme_adminq_poll_period_us": 10000, 00:23:46.558 "nvme_ioq_poll_period_us": 0, 00:23:46.558 "io_queue_requests": 0, 00:23:46.558 "delay_cmd_submit": true, 00:23:46.558 "transport_retry_count": 4, 00:23:46.558 "bdev_retry_count": 3, 00:23:46.558 "transport_ack_timeout": 0, 00:23:46.558 "ctrlr_loss_timeout_sec": 0, 00:23:46.558 "reconnect_delay_sec": 0, 00:23:46.558 "fast_io_fail_timeout_sec": 0, 00:23:46.558 "disable_auto_failback": false, 00:23:46.558 "generate_uuids": false, 00:23:46.558 "transport_tos": 0, 00:23:46.558 "nvme_error_stat": false, 00:23:46.558 "rdma_srq_size": 0, 00:23:46.558 "io_path_stat": false, 00:23:46.558 "allow_accel_sequence": false, 00:23:46.558 "rdma_max_cq_size": 0, 00:23:46.558 "rdma_cm_event_timeout_ms": 0, 00:23:46.558 "dhchap_digests": [ 00:23:46.558 "sha256", 00:23:46.558 "sha384", 00:23:46.558 "sha512" 00:23:46.558 ], 00:23:46.558 "dhchap_dhgroups": [ 00:23:46.558 "null", 00:23:46.558 "ffdhe2048", 00:23:46.558 "ffdhe3072", 00:23:46.558 "ffdhe4096", 00:23:46.558 "ffdhe6144", 00:23:46.558 "ffdhe8192" 00:23:46.558 ], 00:23:46.558 "rdma_umr_per_io": false 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "bdev_nvme_set_hotplug", 00:23:46.558 "params": { 00:23:46.558 "period_us": 100000, 00:23:46.558 "enable": false 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "bdev_malloc_create", 00:23:46.558 "params": { 00:23:46.558 "name": "malloc0", 00:23:46.558 "num_blocks": 8192, 00:23:46.558 "block_size": 4096, 00:23:46.558 "physical_block_size": 4096, 00:23:46.558 "uuid": "e09ccf85-ea6c-4238-9804-f8a94a9a91e1", 00:23:46.558 
"optimal_io_boundary": 0, 00:23:46.558 "md_size": 0, 00:23:46.558 "dif_type": 0, 00:23:46.558 "dif_is_head_of_md": false, 00:23:46.558 "dif_pi_format": 0 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "bdev_wait_for_examine" 00:23:46.558 } 00:23:46.558 ] 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "subsystem": "nbd", 00:23:46.558 "config": [] 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "subsystem": "scheduler", 00:23:46.558 "config": [ 00:23:46.558 { 00:23:46.558 "method": "framework_set_scheduler", 00:23:46.558 "params": { 00:23:46.558 "name": "static" 00:23:46.558 } 00:23:46.558 } 00:23:46.558 ] 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "subsystem": "nvmf", 00:23:46.558 "config": [ 00:23:46.558 { 00:23:46.558 "method": "nvmf_set_config", 00:23:46.558 "params": { 00:23:46.558 "discovery_filter": "match_any", 00:23:46.558 "admin_cmd_passthru": { 00:23:46.558 "identify_ctrlr": false 00:23:46.558 }, 00:23:46.558 "dhchap_digests": [ 00:23:46.558 "sha256", 00:23:46.558 "sha384", 00:23:46.558 "sha512" 00:23:46.558 ], 00:23:46.558 "dhchap_dhgroups": [ 00:23:46.558 "null", 00:23:46.558 "ffdhe2048", 00:23:46.558 "ffdhe3072", 00:23:46.558 "ffdhe4096", 00:23:46.558 "ffdhe6144", 00:23:46.558 "ffdhe8192" 00:23:46.558 ] 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_set_max_subsystems", 00:23:46.558 "params": { 00:23:46.558 "max_subsystems": 1024 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_set_crdt", 00:23:46.558 "params": { 00:23:46.558 "crdt1": 0, 00:23:46.558 "crdt2": 0, 00:23:46.558 "crdt3": 0 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_create_transport", 00:23:46.558 "params": { 00:23:46.558 "trtype": "TCP", 00:23:46.558 "max_queue_depth": 128, 00:23:46.558 "max_io_qpairs_per_ctrlr": 127, 00:23:46.558 "in_capsule_data_size": 4096, 00:23:46.558 "max_io_size": 131072, 00:23:46.558 "io_unit_size": 131072, 00:23:46.558 "max_aq_depth": 128, 00:23:46.558 "num_shared_buffers": 511, 00:23:46.558 "buf_cache_size": 4294967295, 00:23:46.558 "dif_insert_or_strip": false, 00:23:46.558 "zcopy": false, 00:23:46.558 "c2h_success": false, 00:23:46.558 "sock_priority": 0, 00:23:46.558 "abort_timeout_sec": 1, 00:23:46.558 "ack_timeout": 0, 00:23:46.558 "data_wr_pool_size": 0 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_create_subsystem", 00:23:46.558 "params": { 00:23:46.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.558 "allow_any_host": false, 00:23:46.558 "serial_number": "00000000000000000000", 00:23:46.558 "model_number": "SPDK bdev Controller", 00:23:46.558 "max_namespaces": 32, 00:23:46.558 "min_cntlid": 1, 00:23:46.558 "max_cntlid": 65519, 00:23:46.558 "ana_reporting": false 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_subsystem_add_host", 00:23:46.558 "params": { 00:23:46.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.558 "host": "nqn.2016-06.io.spdk:host1", 00:23:46.558 "psk": "key0" 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_subsystem_add_ns", 00:23:46.558 "params": { 00:23:46.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.558 "namespace": { 00:23:46.558 "nsid": 1, 00:23:46.558 "bdev_name": "malloc0", 00:23:46.558 "nguid": "E09CCF85EA6C42389804F8A94A9A91E1", 00:23:46.558 "uuid": "e09ccf85-ea6c-4238-9804-f8a94a9a91e1", 00:23:46.558 "no_auto_visible": false 00:23:46.558 } 00:23:46.558 } 00:23:46.558 }, 00:23:46.558 { 00:23:46.558 "method": "nvmf_subsystem_add_listener", 00:23:46.558 "params": { 
00:23:46.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.558 "listen_address": { 00:23:46.558 "trtype": "TCP", 00:23:46.558 "adrfam": "IPv4", 00:23:46.558 "traddr": "10.0.0.2", 00:23:46.558 "trsvcid": "4420" 00:23:46.558 }, 00:23:46.558 "secure_channel": false, 00:23:46.558 "sock_impl": "ssl" 00:23:46.558 } 00:23:46.558 } 00:23:46.558 ] 00:23:46.558 } 00:23:46.558 ] 00:23:46.558 }' 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2720142 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2720142 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2720142 ']' 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.558 03:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.558 [2024-12-13 03:34:47.569091] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:46.558 [2024-12-13 03:34:47.569200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.558 [2024-12-13 03:34:47.686432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.817 [2024-12-13 03:34:47.790647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.817 [2024-12-13 03:34:47.790687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.817 [2024-12-13 03:34:47.790697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.817 [2024-12-13 03:34:47.790706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.817 [2024-12-13 03:34:47.790714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.817 [2024-12-13 03:34:47.792153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.075 [2024-12-13 03:34:48.282954] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.333 [2024-12-13 03:34:48.314991] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.333 [2024-12-13 03:34:48.315229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2720255 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2720255 /var/tmp/bdevperf.sock 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2720255 ']' 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.333 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:47.334 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.334 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.334 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:47.334 03:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:47.334 "subsystems": [ 00:23:47.334 { 00:23:47.334 "subsystem": "keyring", 00:23:47.334 "config": [ 00:23:47.334 { 00:23:47.334 "method": "keyring_file_add_key", 00:23:47.334 "params": { 00:23:47.334 "name": "key0", 00:23:47.334 "path": "/tmp/tmp.Ov21J81T58" 00:23:47.334 } 00:23:47.334 } 00:23:47.334 ] 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "subsystem": "iobuf", 00:23:47.334 "config": [ 00:23:47.334 { 00:23:47.334 "method": "iobuf_set_options", 00:23:47.334 "params": { 00:23:47.334 "small_pool_count": 8192, 00:23:47.334 "large_pool_count": 1024, 00:23:47.334 "small_bufsize": 8192, 00:23:47.334 "large_bufsize": 135168, 00:23:47.334 "enable_numa": false 00:23:47.334 } 00:23:47.334 } 00:23:47.334 ] 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "subsystem": "sock", 00:23:47.334 "config": [ 00:23:47.334 { 00:23:47.334 "method": "sock_set_default_impl", 00:23:47.334 "params": { 00:23:47.334 "impl_name": "posix" 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "sock_impl_set_options", 00:23:47.334 "params": { 00:23:47.334 "impl_name": "ssl", 00:23:47.334 "recv_buf_size": 4096, 00:23:47.334 "send_buf_size": 4096, 00:23:47.334 "enable_recv_pipe": true, 00:23:47.334 "enable_quickack": false, 00:23:47.334 "enable_placement_id": 0, 00:23:47.334 "enable_zerocopy_send_server": true, 00:23:47.334 "enable_zerocopy_send_client": false, 00:23:47.334 "zerocopy_threshold": 0, 00:23:47.334 "tls_version": 0, 00:23:47.334 "enable_ktls": false 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "sock_impl_set_options", 00:23:47.334 "params": { 00:23:47.334 "impl_name": "posix", 00:23:47.334 "recv_buf_size": 2097152, 00:23:47.334 "send_buf_size": 2097152, 00:23:47.334 "enable_recv_pipe": true, 00:23:47.334 "enable_quickack": false, 00:23:47.334 "enable_placement_id": 0, 00:23:47.334 "enable_zerocopy_send_server": true, 00:23:47.334 "enable_zerocopy_send_client": false, 00:23:47.334 "zerocopy_threshold": 0, 00:23:47.334 "tls_version": 0, 00:23:47.334 "enable_ktls": false 00:23:47.334 } 00:23:47.334 } 00:23:47.334 ] 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "subsystem": "vmd", 00:23:47.334 "config": [] 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "subsystem": "accel", 00:23:47.334 "config": [ 00:23:47.334 { 00:23:47.334 "method": "accel_set_options", 00:23:47.334 "params": { 00:23:47.334 "small_cache_size": 128, 00:23:47.334 "large_cache_size": 16, 00:23:47.334 "task_count": 2048, 00:23:47.334 "sequence_count": 2048, 00:23:47.334 "buf_count": 2048 00:23:47.334 } 00:23:47.334 } 00:23:47.334 ] 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "subsystem": "bdev", 00:23:47.334 "config": [ 00:23:47.334 { 00:23:47.334 "method": "bdev_set_options", 00:23:47.334 "params": { 00:23:47.334 "bdev_io_pool_size": 65535, 00:23:47.334 "bdev_io_cache_size": 256, 00:23:47.334 "bdev_auto_examine": true, 00:23:47.334 "iobuf_small_cache_size": 128, 00:23:47.334 "iobuf_large_cache_size": 16 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": 
"bdev_raid_set_options", 00:23:47.334 "params": { 00:23:47.334 "process_window_size_kb": 1024, 00:23:47.334 "process_max_bandwidth_mb_sec": 0 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "bdev_iscsi_set_options", 00:23:47.334 "params": { 00:23:47.334 "timeout_sec": 30 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "bdev_nvme_set_options", 00:23:47.334 "params": { 00:23:47.334 "action_on_timeout": "none", 00:23:47.334 "timeout_us": 0, 00:23:47.334 "timeout_admin_us": 0, 00:23:47.334 "keep_alive_timeout_ms": 10000, 00:23:47.334 "arbitration_burst": 0, 00:23:47.334 "low_priority_weight": 0, 00:23:47.334 "medium_priority_weight": 0, 00:23:47.334 "high_priority_weight": 0, 00:23:47.334 "nvme_adminq_poll_period_us": 10000, 00:23:47.334 "nvme_ioq_poll_period_us": 0, 00:23:47.334 "io_queue_requests": 512, 00:23:47.334 "delay_cmd_submit": true, 00:23:47.334 "transport_retry_count": 4, 00:23:47.334 "bdev_retry_count": 3, 00:23:47.334 "transport_ack_timeout": 0, 00:23:47.334 "ctrlr_loss_timeout_sec": 0, 00:23:47.334 "reconnect_delay_sec": 0, 00:23:47.334 "fast_io_fail_timeout_sec": 0, 00:23:47.334 "disable_auto_failback": false, 00:23:47.334 "generate_uuids": false, 00:23:47.334 "transport_tos": 0, 00:23:47.334 "nvme_error_stat": false, 00:23:47.334 "rdma_srq_size": 0, 00:23:47.334 "io_path_stat": false, 00:23:47.334 "allow_accel_sequence": false, 00:23:47.334 "rdma_max_cq_size": 0, 00:23:47.334 "rdma_cm_event_timeout_ms": 0, 00:23:47.334 "dhchap_digests": [ 00:23:47.334 "sha256", 00:23:47.334 "sha384", 00:23:47.334 "sha512" 00:23:47.334 ], 00:23:47.334 "dhchap_dhgroups": [ 00:23:47.334 "null", 00:23:47.334 "ffdhe2048", 00:23:47.334 "ffdhe3072", 00:23:47.334 "ffdhe4096", 00:23:47.334 "ffdhe6144", 00:23:47.334 "ffdhe8192" 00:23:47.334 ], 00:23:47.334 "rdma_umr_per_io": false 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "bdev_nvme_attach_controller", 00:23:47.334 "params": { 00:23:47.334 "name": "nvme0", 00:23:47.334 "trtype": "TCP", 00:23:47.334 "adrfam": "IPv4", 00:23:47.334 "traddr": "10.0.0.2", 00:23:47.334 "trsvcid": "4420", 00:23:47.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.334 "prchk_reftag": false, 00:23:47.334 "prchk_guard": false, 00:23:47.334 "ctrlr_loss_timeout_sec": 0, 00:23:47.334 "reconnect_delay_sec": 0, 00:23:47.334 "fast_io_fail_timeout_sec": 0, 00:23:47.334 "psk": "key0", 00:23:47.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.334 "hdgst": false, 00:23:47.334 "ddgst": false, 00:23:47.334 "multipath": "multipath" 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "bdev_nvme_set_hotplug", 00:23:47.334 "params": { 00:23:47.334 "period_us": 100000, 00:23:47.334 "enable": false 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "bdev_enable_histogram", 00:23:47.334 "params": { 00:23:47.334 "name": "nvme0n1", 00:23:47.334 "enable": true 00:23:47.334 } 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "method": "bdev_wait_for_examine" 00:23:47.334 } 00:23:47.334 ] 00:23:47.334 }, 00:23:47.334 { 00:23:47.334 "subsystem": "nbd", 00:23:47.334 "config": [] 00:23:47.334 } 00:23:47.334 ] 00:23:47.334 }' 00:23:47.334 [2024-12-13 03:34:48.475581] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:47.334 [2024-12-13 03:34:48.475667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720255 ] 00:23:47.592 [2024-12-13 03:34:48.586427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.592 [2024-12-13 03:34:48.697131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.159 [2024-12-13 03:34:49.095968] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.159 03:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.159 03:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.159 03:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:48.159 03:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:48.417 03:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.417 03:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.417 Running I/O for 1 seconds... 00:23:49.790 4481.00 IOPS, 17.50 MiB/s 00:23:49.790 Latency(us) 00:23:49.790 [2024-12-13T02:34:50.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.790 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:49.790 Verification LBA range: start 0x0 length 0x2000 00:23:49.790 nvme0n1 : 1.02 4535.72 17.72 0.00 0.00 27994.04 6865.68 26464.06 00:23:49.790 [2024-12-13T02:34:50.999Z] =================================================================================================================== 00:23:49.790 [2024-12-13T02:34:50.999Z] Total : 4535.72 17.72 0.00 0.00 27994.04 6865.68 26464.06 00:23:49.790 { 00:23:49.790 "results": [ 00:23:49.790 { 00:23:49.790 "job": "nvme0n1", 00:23:49.790 "core_mask": "0x2", 00:23:49.790 "workload": "verify", 00:23:49.790 "status": "finished", 00:23:49.790 "verify_range": { 00:23:49.790 "start": 0, 00:23:49.790 "length": 8192 00:23:49.790 }, 00:23:49.790 "queue_depth": 128, 00:23:49.790 "io_size": 4096, 00:23:49.790 "runtime": 1.016156, 00:23:49.790 "iops": 4535.720893248675, 00:23:49.790 "mibps": 17.717659739252635, 00:23:49.790 "io_failed": 0, 00:23:49.790 "io_timeout": 0, 00:23:49.790 "avg_latency_us": 27994.04342228972, 00:23:49.790 "min_latency_us": 6865.676190476191, 00:23:49.790 "max_latency_us": 26464.06095238095 00:23:49.790 } 00:23:49.790 ], 00:23:49.790 "core_count": 1 00:23:49.790 } 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:49.790 nvmf_trace.0 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2720255 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2720255 ']' 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2720255 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720255 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720255' 00:23:49.790 killing process with pid 2720255 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2720255 00:23:49.790 Received shutdown signal, test time was about 1.000000 seconds 00:23:49.790 00:23:49.790 Latency(us) 00:23:49.790 [2024-12-13T02:34:50.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.790 [2024-12-13T02:34:50.999Z] =================================================================================================================== 00:23:49.790 [2024-12-13T02:34:50.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.790 03:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2720255 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.723 rmmod nvme_tcp 00:23:50.723 rmmod nvme_fabrics 00:23:50.723 rmmod nvme_keyring 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.723 03:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2720142 ']' 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2720142 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2720142 ']' 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2720142 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720142 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720142' 00:23:50.723 killing process with pid 2720142 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2720142 00:23:50.723 03:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2720142 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.191 03:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.0HwdhAqQGd /tmp/tmp.yj9e9xSAzV /tmp/tmp.Ov21J81T58 00:23:54.097 00:23:54.097 real 1m46.034s 00:23:54.097 user 2m45.676s 00:23:54.097 sys 0m30.082s 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.097 ************************************ 00:23:54.097 END TEST nvmf_tls 
00:23:54.097 ************************************ 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:54.097 ************************************ 00:23:54.097 START TEST nvmf_fips 00:23:54.097 ************************************ 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:54.097 * Looking for test storage... 00:23:54.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.097 --rc genhtml_branch_coverage=1 00:23:54.097 --rc genhtml_function_coverage=1 00:23:54.097 --rc genhtml_legend=1 00:23:54.097 --rc geninfo_all_blocks=1 00:23:54.097 --rc geninfo_unexecuted_blocks=1 00:23:54.097 00:23:54.097 ' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.097 --rc genhtml_branch_coverage=1 00:23:54.097 --rc genhtml_function_coverage=1 00:23:54.097 --rc genhtml_legend=1 00:23:54.097 --rc geninfo_all_blocks=1 00:23:54.097 --rc geninfo_unexecuted_blocks=1 00:23:54.097 00:23:54.097 ' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.097 --rc genhtml_branch_coverage=1 00:23:54.097 --rc genhtml_function_coverage=1 00:23:54.097 --rc genhtml_legend=1 00:23:54.097 --rc geninfo_all_blocks=1 00:23:54.097 --rc geninfo_unexecuted_blocks=1 00:23:54.097 00:23:54.097 ' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.097 --rc genhtml_branch_coverage=1 00:23:54.097 --rc genhtml_function_coverage=1 00:23:54.097 --rc genhtml_legend=1 00:23:54.097 --rc geninfo_all_blocks=1 00:23:54.097 --rc geninfo_unexecuted_blocks=1 00:23:54.097 00:23:54.097 ' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.097 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:54.098 03:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:54.098 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.356 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:54.357 Error setting digest 00:23:54.357 4022340C197F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:54.357 4022340C197F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.357 
03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.357 03:34:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.631 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.632 03:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.632 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.632 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.632 03:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.632 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.632 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.632 03:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.632 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:23:59.891 00:23:59.891 --- 10.0.0.2 ping statistics --- 00:23:59.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.891 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:23:59.891 00:23:59.891 --- 10.0.0.1 ping statistics --- 00:23:59.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.891 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.891 03:35:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2724421 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2724421 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2724421 ']' 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.891 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.150 [2024-12-13 03:35:01.109148] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:00.150 [2024-12-13 03:35:01.109238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.150 [2024-12-13 03:35:01.227427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.150 [2024-12-13 03:35:01.337997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.150 [2024-12-13 03:35:01.338040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.150 [2024-12-13 03:35:01.338051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.150 [2024-12-13 03:35:01.338062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.150 [2024-12-13 03:35:01.338070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.150 [2024-12-13 03:35:01.339529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N4o 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N4o 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N4o 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N4o 00:24:00.717 03:35:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:00.976 [2024-12-13 03:35:02.087100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.976 [2024-12-13 03:35:02.103089] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.976 [2024-12-13 03:35:02.103327] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.976 malloc0 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.235 03:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2724592 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2724592 /var/tmp/bdevperf.sock 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2724592 ']' 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.235 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.236 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.236 03:35:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:01.236 [2024-12-13 03:35:02.283436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:01.236 [2024-12-13 03:35:02.283526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724592 ] 00:24:01.236 [2024-12-13 03:35:02.396886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.494 [2024-12-13 03:35:02.504626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.062 03:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.062 03:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:02.062 03:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N4o 00:24:02.062 03:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:02.320 [2024-12-13 03:35:03.410640] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.320 TLSTESTn1 00:24:02.320 03:35:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.578 Running I/O for 10 seconds... 
00:24:04.456 4469.00 IOPS, 17.46 MiB/s [2024-12-13T02:35:07.042Z] 4535.50 IOPS, 17.72 MiB/s [2024-12-13T02:35:07.978Z] 4566.67 IOPS, 17.84 MiB/s [2024-12-13T02:35:08.914Z] 4582.75 IOPS, 17.90 MiB/s [2024-12-13T02:35:09.849Z] 4572.40 IOPS, 17.86 MiB/s [2024-12-13T02:35:10.784Z] 4599.17 IOPS, 17.97 MiB/s [2024-12-13T02:35:11.720Z] 4600.43 IOPS, 17.97 MiB/s [2024-12-13T02:35:12.657Z] 4605.12 IOPS, 17.99 MiB/s [2024-12-13T02:35:14.035Z] 4619.22 IOPS, 18.04 MiB/s [2024-12-13T02:35:14.035Z] 4626.90 IOPS, 18.07 MiB/s 00:24:12.826 Latency(us) 00:24:12.826 [2024-12-13T02:35:14.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.826 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:12.826 Verification LBA range: start 0x0 length 0x2000 00:24:12.826 TLSTESTn1 : 10.02 4631.23 18.09 0.00 0.00 27591.83 7645.87 30458.64 00:24:12.826 [2024-12-13T02:35:14.035Z] =================================================================================================================== 00:24:12.826 [2024-12-13T02:35:14.035Z] Total : 4631.23 18.09 0.00 0.00 27591.83 7645.87 30458.64 00:24:12.826 { 00:24:12.826 "results": [ 00:24:12.826 { 00:24:12.826 "job": "TLSTESTn1", 00:24:12.826 "core_mask": "0x4", 00:24:12.826 "workload": "verify", 00:24:12.826 "status": "finished", 00:24:12.826 "verify_range": { 00:24:12.826 "start": 0, 00:24:12.826 "length": 8192 00:24:12.826 }, 00:24:12.826 "queue_depth": 128, 00:24:12.826 "io_size": 4096, 00:24:12.826 "runtime": 10.017864, 00:24:12.826 "iops": 4631.226776486485, 00:24:12.826 "mibps": 18.09072959565033, 00:24:12.826 "io_failed": 0, 00:24:12.826 "io_timeout": 0, 00:24:12.826 "avg_latency_us": 27591.83381099154, 00:24:12.826 "min_latency_us": 7645.866666666667, 00:24:12.826 "max_latency_us": 30458.63619047619 00:24:12.826 } 00:24:12.826 ], 00:24:12.826 "core_count": 1 00:24:12.826 } 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:12.826 nvmf_trace.0 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2724592 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2724592 ']' 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2724592 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724592 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724592' 00:24:12.826 killing process with pid 2724592 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2724592 00:24:12.826 Received shutdown signal, test time was about 10.000000 seconds 00:24:12.826 00:24:12.826 Latency(us) 00:24:12.826 [2024-12-13T02:35:14.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.826 [2024-12-13T02:35:14.035Z] =================================================================================================================== 00:24:12.826 [2024-12-13T02:35:14.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.826 03:35:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2724592 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.764 rmmod nvme_tcp 00:24:13.764 rmmod nvme_fabrics 00:24:13.764 rmmod nvme_keyring 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2724421 ']' 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2724421 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2724421 ']' 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2724421 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724421 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.764 03:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724421' 00:24:13.764 killing process with pid 2724421 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2724421 00:24:13.764 03:35:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2724421 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.148 03:35:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N4o 00:24:17.051 00:24:17.051 real 0m23.061s 00:24:17.051 user 0m25.947s 00:24:17.051 sys 0m9.164s 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:17.051 ************************************ 00:24:17.051 END TEST nvmf_fips 00:24:17.051 ************************************ 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.051 ************************************ 00:24:17.051 START TEST nvmf_control_msg_list 00:24:17.051 ************************************ 00:24:17.051 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:17.310 * Looking for test storage... 
00:24:17.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.310 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:17.310 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:17.310 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:17.310 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:17.310 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:17.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.311 --rc genhtml_branch_coverage=1 00:24:17.311 --rc genhtml_function_coverage=1 00:24:17.311 --rc genhtml_legend=1 00:24:17.311 --rc geninfo_all_blocks=1 00:24:17.311 --rc geninfo_unexecuted_blocks=1 00:24:17.311 00:24:17.311 ' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:17.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.311 --rc genhtml_branch_coverage=1 00:24:17.311 --rc genhtml_function_coverage=1 00:24:17.311 --rc genhtml_legend=1 00:24:17.311 --rc geninfo_all_blocks=1 00:24:17.311 --rc geninfo_unexecuted_blocks=1 00:24:17.311 00:24:17.311 ' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:17.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.311 --rc genhtml_branch_coverage=1 00:24:17.311 --rc genhtml_function_coverage=1 00:24:17.311 --rc genhtml_legend=1 00:24:17.311 --rc geninfo_all_blocks=1 00:24:17.311 --rc geninfo_unexecuted_blocks=1 00:24:17.311 00:24:17.311 ' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:17.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.311 --rc genhtml_branch_coverage=1 00:24:17.311 --rc genhtml_function_coverage=1 00:24:17.311 --rc genhtml_legend=1 00:24:17.311 --rc geninfo_all_blocks=1 00:24:17.311 --rc geninfo_unexecuted_blocks=1 00:24:17.311 00:24:17.311 ' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.311 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.312 03:35:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.581 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:22.582 03:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:22.582 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.582 03:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:22.582 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:22.582 Found net devices under 0000:af:00.0: cvl_0_0 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:22.582 Found net devices under 0000:af:00.1: cvl_0_1 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.582 03:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:24:22.582 00:24:22.582 --- 10.0.0.2 ping statistics --- 00:24:22.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.582 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:24:22.582 00:24:22.582 --- 10.0.0.1 ping statistics --- 00:24:22.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.582 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.582 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2730261 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2730261 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2730261 ']' 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.583 03:35:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:22.583 [2024-12-13 03:35:23.669592] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:22.583 [2024-12-13 03:35:23.669683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.583 [2024-12-13 03:35:23.785140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.841 [2024-12-13 03:35:23.887456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.841 [2024-12-13 03:35:23.887500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.841 [2024-12-13 03:35:23.887510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.841 [2024-12-13 03:35:23.887520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.841 [2024-12-13 03:35:23.887527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
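For reference, the nvmftestinit/nvmfappstart sequence traced above reduces to roughly the following commands (cvl_0_0 and cvl_0_1 are the two E810 ports detected on this host; the nvmf_tgt path is abbreviated from the CI workspace, and the pid capture is a simplified reconstruction of what common.sh does):

  # Move one E810 port into a private namespace and address both ends of the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace with all tracepoint groups enabled,
  # then wait for its RPC socket at /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"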
00:24:22.841 [2024-12-13 03:35:23.888872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.410 [2024-12-13 03:35:24.509110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.410 Malloc0 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.410 03:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:23.410 [2024-12-13 03:35:24.573263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2730313 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2730314 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2730315 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2730313 00:24:23.410 03:35:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.669 [2024-12-13 03:35:24.693455] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:23.669 [2024-12-13 03:35:24.693733] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:23.669 [2024-12-13 03:35:24.693987] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:24.603 Initializing NVMe Controllers 00:24:24.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:24.603 Initialization complete. Launching workers. 
00:24:24.603 ======================================================== 00:24:24.604 Latency(us) 00:24:24.604 Device Information : IOPS MiB/s Average min max 00:24:24.604 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5566.00 21.74 179.25 146.56 1002.93 00:24:24.604 ======================================================== 00:24:24.604 Total : 5566.00 21.74 179.25 146.56 1002.93 00:24:24.604 00:24:24.604 Initializing NVMe Controllers 00:24:24.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:24.604 Initialization complete. Launching workers. 00:24:24.604 ======================================================== 00:24:24.604 Latency(us) 00:24:24.604 Device Information : IOPS MiB/s Average min max 00:24:24.604 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 33.00 0.13 31103.76 237.86 41958.16 00:24:24.604 ======================================================== 00:24:24.604 Total : 33.00 0.13 31103.76 237.86 41958.16 00:24:24.604 00:24:24.604 Initializing NVMe Controllers 00:24:24.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:24.604 Initialization complete. Launching workers. 00:24:24.604 ======================================================== 00:24:24.604 Latency(us) 00:24:24.604 Device Information : IOPS MiB/s Average min max 00:24:24.604 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41193.14 40836.34 41903.12 00:24:24.604 ======================================================== 00:24:24.604 Total : 25.00 0.10 41193.14 40836.34 41903.12 00:24:24.604 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2730314 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2730315 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.863 rmmod nvme_tcp 00:24:24.863 rmmod nvme_fabrics 00:24:24.863 rmmod nvme_keyring 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 2730261 ']' 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2730261 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2730261 ']' 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2730261 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730261 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730261' 00:24:24.863 killing process with pid 2730261 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2730261 00:24:24.863 03:35:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2730261 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.241 03:35:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.147 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.147 00:24:28.147 real 0m10.970s 00:24:28.147 user 0m8.082s 00:24:28.147 sys 0m4.877s 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:28.148 ************************************ 00:24:28.148 END TEST nvmf_control_msg_list 00:24:28.148 
************************************ 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.148 ************************************ 00:24:28.148 START TEST nvmf_wait_for_buf 00:24:28.148 ************************************ 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:28.148 * Looking for test storage... 00:24:28.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:28.148 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.407 --rc genhtml_branch_coverage=1 00:24:28.407 --rc genhtml_function_coverage=1 00:24:28.407 --rc genhtml_legend=1 00:24:28.407 --rc geninfo_all_blocks=1 00:24:28.407 --rc geninfo_unexecuted_blocks=1 00:24:28.407 00:24:28.407 ' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.407 --rc genhtml_branch_coverage=1 00:24:28.407 --rc genhtml_function_coverage=1 00:24:28.407 --rc genhtml_legend=1 00:24:28.407 --rc geninfo_all_blocks=1 00:24:28.407 --rc geninfo_unexecuted_blocks=1 00:24:28.407 00:24:28.407 ' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.407 --rc genhtml_branch_coverage=1 00:24:28.407 --rc genhtml_function_coverage=1 00:24:28.407 --rc genhtml_legend=1 00:24:28.407 --rc geninfo_all_blocks=1 00:24:28.407 --rc geninfo_unexecuted_blocks=1 00:24:28.407 00:24:28.407 ' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:28.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.407 --rc genhtml_branch_coverage=1 00:24:28.407 --rc genhtml_function_coverage=1 00:24:28.407 --rc genhtml_legend=1 00:24:28.407 --rc geninfo_all_blocks=1 00:24:28.407 --rc geninfo_unexecuted_blocks=1 00:24:28.407 00:24:28.407 ' 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.407 03:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.407 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.408 03:35:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.682 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.683 
03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:33.683 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:33.683 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:33.683 Found net devices under 0000:af:00.0: cvl_0_0 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:33.683 Found net devices under 0000:af:00.1: cvl_0_1 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.683 03:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.683 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:24:33.684 00:24:33.684 --- 10.0.0.2 ping statistics --- 00:24:33.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.684 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:33.684 00:24:33.684 --- 10.0.0.1 ping statistics --- 00:24:33.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.684 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2734214 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2734214 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2734214 ']' 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.684 03:35:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:33.943 [2024-12-13 03:35:34.919301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:33.943 [2024-12-13 03:35:34.919393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.943 [2024-12-13 03:35:35.035860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.943 [2024-12-13 03:35:35.141356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.943 [2024-12-13 03:35:35.141395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.943 [2024-12-13 03:35:35.141406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.943 [2024-12-13 03:35:35.141416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.943 [2024-12-13 03:35:35.141424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.943 [2024-12-13 03:35:35.142845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.509 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.509 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:34.767 03:35:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.767 03:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 Malloc0 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 [2024-12-13 03:35:36.065221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:35.026 [2024-12-13 03:35:36.093452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.026 03:35:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:35.026 [2024-12-13 03:35:36.212040] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:36.924 Initializing NVMe Controllers 00:24:36.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:36.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:36.924 Initialization complete. Launching workers. 00:24:36.924 ======================================================== 00:24:36.924 Latency(us) 00:24:36.924 Device Information : IOPS MiB/s Average min max 00:24:36.924 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 30.98 3.87 135142.04 7152.22 194540.48 00:24:36.924 ======================================================== 00:24:36.924 Total : 30.98 3.87 135142.04 7152.22 194540.48 00:24:36.924 00:24:36.924 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:36.924 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=470 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 470 -eq 0 ]] 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.925 rmmod nvme_tcp 00:24:36.925 rmmod nvme_fabrics 00:24:36.925 rmmod nvme_keyring 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2734214 ']' 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2734214 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2734214 ']' 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2734214 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734214 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734214' 00:24:36.925 killing process with pid 2734214 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2734214 00:24:36.925 03:35:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2734214 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.861 03:35:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.397 00:24:40.397 real 0m11.799s 00:24:40.397 user 0m5.852s 00:24:40.397 sys 0m4.543s 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.397 ************************************ 00:24:40.397 END TEST nvmf_wait_for_buf 00:24:40.397 ************************************ 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.397 03:35:41 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.397 ************************************ 00:24:40.397 START TEST nvmf_fuzz 00:24:40.397 ************************************ 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:40.397 * Looking for test storage... 00:24:40.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:40.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.397 --rc genhtml_branch_coverage=1 00:24:40.397 --rc genhtml_function_coverage=1 00:24:40.397 --rc genhtml_legend=1 00:24:40.397 --rc geninfo_all_blocks=1 00:24:40.397 --rc geninfo_unexecuted_blocks=1 00:24:40.397 00:24:40.397 ' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:40.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.397 --rc genhtml_branch_coverage=1 00:24:40.397 --rc genhtml_function_coverage=1 00:24:40.397 --rc genhtml_legend=1 00:24:40.397 --rc geninfo_all_blocks=1 00:24:40.397 --rc geninfo_unexecuted_blocks=1 00:24:40.397 00:24:40.397 ' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:40.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.397 --rc genhtml_branch_coverage=1 00:24:40.397 --rc genhtml_function_coverage=1 00:24:40.397 --rc genhtml_legend=1 00:24:40.397 --rc geninfo_all_blocks=1 00:24:40.397 --rc geninfo_unexecuted_blocks=1 00:24:40.397 00:24:40.397 ' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:40.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.397 --rc genhtml_branch_coverage=1 00:24:40.397 --rc genhtml_function_coverage=1 00:24:40.397 --rc genhtml_legend=1 00:24:40.397 --rc geninfo_all_blocks=1 00:24:40.397 --rc geninfo_unexecuted_blocks=1 00:24:40.397 00:24:40.397 ' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.397 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.398 03:35:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.667 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:45.668 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:45.668 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:45.668 Found net devices under 0000:af:00.0: cvl_0_0 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:45.668 Found net devices under 0000:af:00.1: cvl_0_1 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.668 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:45.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:24:45.669 00:24:45.669 --- 10.0.0.2 ping statistics --- 00:24:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.669 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:24:45.669 00:24:45.669 --- 10.0.0.1 ping statistics --- 00:24:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.669 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2738147 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2738147 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2738147 ']' 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
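The nvmf_tcp_init trace above carves the two E810 ports into a target/initiator pair on a single host: the target port is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420, and a ping in each direction confirms the link before the target application is launched. Stripped of the harness wrappers, the steps amount to the following sketch (interface names and addresses are the ones used in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # root namespace -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

Keeping only the target port in a namespace lets both ends of the TCP connection run on the same machine while still exercising the physical NIC (NET_TYPE=phy).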
00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.669 03:35:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:46.653 Malloc0 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.653 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:46.654 03:35:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:18.819 Fuzzing completed. 
Shutting down the fuzz application 00:25:18.819 00:25:18.819 Dumping successful admin opcodes: 00:25:18.819 9, 10, 00:25:18.819 Dumping successful io opcodes: 00:25:18.819 0, 9, 00:25:18.819 NS: 0x2000008efec0 I/O qp, Total commands completed: 683422, total successful commands: 3989, random_seed: 3188276672 00:25:18.819 NS: 0x2000008efec0 admin qp, Total commands completed: 77616, total successful commands: 16, random_seed: 1662506176 00:25:18.819 03:36:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:18.819 Fuzzing completed. Shutting down the fuzz application 00:25:18.819 00:25:18.819 Dumping successful admin opcodes: 00:25:18.819 00:25:18.819 Dumping successful io opcodes: 00:25:18.819 00:25:18.819 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1144014274 00:25:18.819 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1144113672 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.819 03:36:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.819 rmmod nvme_tcp 00:25:18.819 rmmod nvme_fabrics 00:25:19.078 rmmod nvme_keyring 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 2738147 ']' 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 2738147 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2738147 ']' 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 2738147 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.078 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738147 00:25:19.079 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.079 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.079 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738147' 00:25:19.079 killing process with pid 2738147 00:25:19.079 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 2738147 00:25:19.079 03:36:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 2738147 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.457 03:36:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:22.362 00:25:22.362 real 0m42.406s 00:25:22.362 user 0m56.818s 00:25:22.362 sys 0m15.762s 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:22.362 ************************************ 00:25:22.362 END TEST nvmf_fuzz 00:25:22.362 ************************************ 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:22.362 ************************************ 00:25:22.362 START 
TEST nvmf_multiconnection 00:25:22.362 ************************************ 00:25:22.362 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:22.622 * Looking for test storage... 00:25:22.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.622 --rc genhtml_branch_coverage=1 00:25:22.622 --rc genhtml_function_coverage=1 00:25:22.622 --rc genhtml_legend=1 00:25:22.622 --rc geninfo_all_blocks=1 00:25:22.622 --rc geninfo_unexecuted_blocks=1 00:25:22.622 00:25:22.622 ' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.622 --rc genhtml_branch_coverage=1 00:25:22.622 --rc genhtml_function_coverage=1 00:25:22.622 --rc genhtml_legend=1 00:25:22.622 --rc geninfo_all_blocks=1 00:25:22.622 --rc geninfo_unexecuted_blocks=1 00:25:22.622 00:25:22.622 ' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.622 --rc genhtml_branch_coverage=1 00:25:22.622 --rc genhtml_function_coverage=1 00:25:22.622 --rc genhtml_legend=1 00:25:22.622 --rc geninfo_all_blocks=1 00:25:22.622 --rc geninfo_unexecuted_blocks=1 00:25:22.622 00:25:22.622 ' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:22.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.622 --rc genhtml_branch_coverage=1 00:25:22.622 --rc genhtml_function_coverage=1 00:25:22.622 --rc genhtml_legend=1 00:25:22.622 --rc geninfo_all_blocks=1 00:25:22.622 --rc geninfo_unexecuted_blocks=1 00:25:22.622 00:25:22.622 ' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.622 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.623 03:36:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.900 03:36:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:27.900 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:27.900 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:27.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.901 03:36:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:27.901 Found net devices under 0000:af:00.0: cvl_0_0 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:27.901 Found net devices under 0000:af:00.1: cvl_0_1 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.901 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:25:28.160 00:25:28.160 --- 10.0.0.2 ping statistics --- 00:25:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.160 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:25:28.160 00:25:28.160 --- 10.0.0.1 ping statistics --- 00:25:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.160 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:28.160 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=2747711 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 2747711 00:25:28.420 03:36:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 2747711 ']' 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.420 03:36:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.420 [2024-12-13 03:36:29.482544] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:28.420 [2024-12-13 03:36:29.482651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.420 [2024-12-13 03:36:29.598635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.679 [2024-12-13 03:36:29.708051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.679 [2024-12-13 03:36:29.708096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.679 [2024-12-13 03:36:29.708108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.679 [2024-12-13 03:36:29.708118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.679 [2024-12-13 03:36:29.708126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
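For the multiconnection run the target is started the same way as for the fuzz test, but with four reactor cores (-m 0xF); the harness then blocks in waitforlisten until the application's RPC socket answers. A minimal stand-in for that launch-and-wait step, assuming the stock rpc.py client is used instead of the harness helper, could look like:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
# poll the default UNIX-domain RPC socket until the target responds
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done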
00:25:28.679 [2024-12-13 03:36:29.713948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.679 [2024-12-13 03:36:29.713969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.679 [2024-12-13 03:36:29.714041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.679 [2024-12-13 03:36:29.714049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 [2024-12-13 03:36:30.331676] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 Malloc1 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 [2024-12-13 03:36:30.446381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.246 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.505 Malloc2 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.505 03:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.505 Malloc3 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.505 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.506 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 Malloc4 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 Malloc5 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 Malloc6 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.765 03:36:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.024 Malloc7 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:30.024 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.025 Malloc8 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.025 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 Malloc9 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:30.284 03:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 Malloc10 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 Malloc11 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.284 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.543 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.543 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:30.543 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.543 03:36:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:31.479 03:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:31.479 03:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:31.479 03:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:31.479 03:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:31.479 03:36:32 
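The trace above covers the target-side provisioning phase of multiconnection.sh: for each of the 11 subsystems it creates a 64 MiB malloc bdev with 512-byte blocks, creates a subsystem that allows any host and carries serial SPDKn, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. The sketch below paraphrases that loop using scripts/rpc.py directly (rpc_cmd in the trace is the autotest wrapper around it); paths and the subsystem count are taken from the trace, but treat this as an illustrative sketch rather than the exact test code.

#!/usr/bin/env bash
# Per-subsystem provisioning, mirroring the rpc_cmd calls in the trace above (illustrative sketch).
NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                             # 64 MiB bdev, 512 B block size
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"  # allow any host, serial SPDK$i
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"      # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420  # TCP listener
done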
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.011 03:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:34.577 03:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:34.577 03:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:34.577 03:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.577 03:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:34.578 03:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.110 03:36:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:38.046 03:36:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:38.046 03:36:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.046 03:36:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
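The connect phase shown here runs on the host side: each subsystem is attached with nvme-cli over TCP using the host NQN and host ID derived from the machine UUID, and waitforserial then polls lsblk until a block device reporting the expected serial (SPDK1, SPDK2, ...) appears, retrying up to 15 times with a 2-second sleep between checks. Below is a simplified equivalent of that pattern; connect_and_wait is an illustrative name, not the autotest helper itself, and the address, port, and host identifiers are simply copied from the trace.

# Connect one subsystem and wait for its namespace to show up (illustrative sketch).
connect_and_wait() {
    local i=$1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562" \
        --hostid="80b56b8f-cbc7-e911-906e-0017a4403562"
    local attempt
    for attempt in $(seq 1 15); do
        sleep 2
        # Count block devices whose SERIAL column matches the expected SPDK$i serial.
        if [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; then
            return 0
        fi
    done
    return 1
}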
nvme_devices=0 00:25:38.046 03:36:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.046 03:36:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.948 03:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:41.324 03:36:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:41.324 03:36:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.324 03:36:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.324 03:36:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.324 03:36:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.227 03:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:44.604 03:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:44.604 03:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:44.604 03:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.604 03:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:44.604 03:36:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.506 03:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:47.883 03:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:47.883 03:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:47.883 03:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.883 03:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:47.883 03:36:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.786 03:36:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:51.162 03:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:51.162 03:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:51.162 03:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:51.162 03:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:51.162 03:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.064 03:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:54.452 03:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:54.452 03:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.452 03:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.452 03:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.452 03:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.983 03:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:57.919 03:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:57.919 03:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:57.919 03:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.919 03:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:57.919 03:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:59.822 03:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:01.724 03:37:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:01.724 03:37:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.724 03:37:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.724 03:37:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.724 03:37:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:03.630 03:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.630 03:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:05.006 03:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:05.006 03:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:05.006 03:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:05.006 03:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:05.006 03:37:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.049 03:37:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:07.049 [global] 00:26:07.049 thread=1 00:26:07.049 invalidate=1 00:26:07.049 rw=read 00:26:07.049 time_based=1 00:26:07.049 runtime=10 00:26:07.049 ioengine=libaio 00:26:07.049 direct=1 00:26:07.049 bs=262144 00:26:07.049 iodepth=64 00:26:07.049 norandommap=1 00:26:07.049 numjobs=1 00:26:07.049 00:26:07.049 [job0] 00:26:07.049 filename=/dev/nvme0n1 00:26:07.049 [job1] 00:26:07.049 filename=/dev/nvme10n1 00:26:07.049 [job2] 00:26:07.049 filename=/dev/nvme1n1 00:26:07.049 [job3] 00:26:07.049 filename=/dev/nvme2n1 00:26:07.049 [job4] 00:26:07.049 filename=/dev/nvme3n1 00:26:07.049 [job5] 00:26:07.050 filename=/dev/nvme4n1 00:26:07.050 [job6] 00:26:07.050 filename=/dev/nvme5n1 00:26:07.050 [job7] 00:26:07.050 filename=/dev/nvme6n1 00:26:07.050 [job8] 00:26:07.050 filename=/dev/nvme7n1 00:26:07.050 [job9] 00:26:07.050 filename=/dev/nvme8n1 00:26:07.050 [job10] 00:26:07.050 filename=/dev/nvme9n1 00:26:07.050 Could not set queue depth (nvme0n1) 00:26:07.050 Could not set queue depth (nvme10n1) 00:26:07.050 Could not set queue depth (nvme1n1) 00:26:07.050 Could not set queue depth (nvme2n1) 00:26:07.050 Could not set queue depth (nvme3n1) 00:26:07.050 Could not set queue depth (nvme4n1) 00:26:07.050 Could not set queue depth (nvme5n1) 00:26:07.050 Could not set queue depth (nvme6n1) 00:26:07.050 Could not set queue depth (nvme7n1) 00:26:07.050 Could not set queue depth (nvme8n1) 00:26:07.050 Could not set queue depth (nvme9n1) 00:26:07.308 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
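With all 11 namespaces attached, the read pass is driven through scripts/fio-wrapper, which emits the job file echoed above: a single [global] section (libaio, direct I/O, sequential 256 KiB reads, queue depth 64, 10-second time-based run) plus one [jobN] stanza per /dev/nvmeXn1 device. The sketch below builds an equivalent job file by hand; the output path is illustrative and the device list simply mirrors the order printed above, so this is a hedged approximation of what the wrapper does, not its actual code.

# Recreate the read-pass job file shown above (illustrative path and device list).
jobfile=/tmp/multiconn-read.fio
cat > "$jobfile" <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
n=0
for dev in /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
           /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1; do
    printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> "$jobfile"   # one job stanza per namespace
    n=$((n + 1))
done
fio "$jobfile"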
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.308 fio-3.35 00:26:07.308 Starting 11 threads 00:26:19.520 00:26:19.520 job0: (groupid=0, jobs=1): err= 0: pid=2754232: Fri Dec 13 03:37:18 2024 00:26:19.520 read: IOPS=237, BW=59.4MiB/s (62.3MB/s)(601MiB/10112msec) 00:26:19.520 slat (usec): min=17, max=216254, avg=2371.17, stdev=11481.71 00:26:19.520 clat (msec): min=2, max=808, avg=266.61, stdev=185.90 00:26:19.520 lat (msec): min=2, max=808, avg=268.98, stdev=187.50 00:26:19.520 clat percentiles (msec): 00:26:19.520 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 85], 00:26:19.520 | 30.00th=[ 167], 40.00th=[ 190], 50.00th=[ 228], 60.00th=[ 292], 00:26:19.520 | 70.00th=[ 359], 80.00th=[ 439], 90.00th=[ 523], 95.00th=[ 625], 00:26:19.520 | 99.00th=[ 726], 99.50th=[ 735], 99.90th=[ 768], 99.95th=[ 768], 00:26:19.520 | 99.99th=[ 810] 00:26:19.520 bw ( KiB/s): min=27648, max=148992, per=6.69%, avg=59878.40, stdev=33694.47, samples=20 00:26:19.520 iops : min= 108, max= 582, avg=233.90, stdev=131.62, samples=20 00:26:19.520 lat (msec) : 4=0.17%, 10=0.67%, 20=8.49%, 50=5.74%, 100=6.49% 00:26:19.520 lat (msec) : 250=31.75%, 500=35.37%, 750=11.11%, 1000=0.21% 00:26:19.520 cpu : usr=0.09%, sys=1.08%, ctx=700, majf=0, minf=4097 00:26:19.520 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:19.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.521 job1: (groupid=0, jobs=1): err= 0: pid=2754236: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=294, BW=73.5MiB/s (77.1MB/s)(744MiB/10109msec) 00:26:19.521 slat (usec): min=9, max=382283, avg=2396.74, stdev=14781.53 00:26:19.521 clat (msec): min=5, max=1159, avg=214.96, stdev=194.03 00:26:19.521 lat (msec): min=5, max=1159, avg=217.36, stdev=195.18 00:26:19.521 clat percentiles (msec): 00:26:19.521 | 1.00th=[ 19], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 69], 00:26:19.521 | 30.00th=[ 93], 40.00th=[ 130], 50.00th=[ 157], 60.00th=[ 194], 00:26:19.521 | 70.00th=[ 234], 80.00th=[ 338], 90.00th=[ 485], 95.00th=[ 575], 00:26:19.521 | 99.00th=[ 1083], 99.50th=[ 1150], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:19.521 | 99.99th=[ 1167] 00:26:19.521 bw ( KiB/s): min= 
7680, max=210432, per=8.32%, avg=74496.00, stdev=46981.85, samples=20 00:26:19.521 iops : min= 30, max= 822, avg=291.00, stdev=183.52, samples=20 00:26:19.521 lat (msec) : 10=0.17%, 20=1.68%, 50=10.63%, 100=19.03%, 250=40.85% 00:26:19.521 lat (msec) : 500=18.86%, 750=7.13%, 1000=0.13%, 2000=1.51% 00:26:19.521 cpu : usr=0.13%, sys=1.10%, ctx=478, majf=0, minf=4097 00:26:19.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:19.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=2974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.521 job2: (groupid=0, jobs=1): err= 0: pid=2754244: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=400, BW=100MiB/s (105MB/s)(1016MiB/10140msec) 00:26:19.521 slat (usec): min=15, max=492938, avg=1507.56, stdev=12910.60 00:26:19.521 clat (usec): min=1552, max=842330, avg=158027.37, stdev=180556.25 00:26:19.521 lat (usec): min=1584, max=858537, avg=159534.93, stdev=182458.24 00:26:19.521 clat percentiles (msec): 00:26:19.521 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 17], 20.00th=[ 29], 00:26:19.521 | 30.00th=[ 44], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 95], 00:26:19.521 | 70.00th=[ 138], 80.00th=[ 372], 90.00th=[ 464], 95.00th=[ 518], 00:26:19.521 | 99.00th=[ 634], 99.50th=[ 827], 99.90th=[ 844], 99.95th=[ 844], 00:26:19.521 | 99.99th=[ 844] 00:26:19.521 bw ( KiB/s): min= 6656, max=310272, per=11.43%, avg=102374.40, stdev=90337.63, samples=20 00:26:19.521 iops : min= 26, max= 1212, avg=399.90, stdev=352.88, samples=20 00:26:19.521 lat (msec) : 2=0.02%, 4=0.22%, 10=5.09%, 20=7.43%, 50=23.04% 00:26:19.521 lat (msec) : 100=27.00%, 250=13.09%, 500=17.33%, 750=5.88%, 1000=0.89% 00:26:19.521 cpu : usr=0.18%, sys=1.35%, ctx=1242, majf=0, minf=4097 00:26:19.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:19.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=4063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.521 job3: (groupid=0, jobs=1): err= 0: pid=2754249: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=207, BW=51.9MiB/s (54.4MB/s)(526MiB/10138msec) 00:26:19.521 slat (usec): min=17, max=328473, avg=2780.59, stdev=16703.34 00:26:19.521 clat (msec): min=2, max=1080, avg=305.21, stdev=216.56 00:26:19.521 lat (msec): min=2, max=1080, avg=307.99, stdev=218.74 00:26:19.521 clat percentiles (msec): 00:26:19.521 | 1.00th=[ 12], 5.00th=[ 19], 10.00th=[ 34], 20.00th=[ 48], 00:26:19.521 | 30.00th=[ 136], 40.00th=[ 241], 50.00th=[ 321], 60.00th=[ 388], 00:26:19.521 | 70.00th=[ 439], 80.00th=[ 481], 90.00th=[ 550], 95.00th=[ 642], 00:26:19.521 | 99.00th=[ 961], 99.50th=[ 969], 99.90th=[ 1036], 99.95th=[ 1036], 00:26:19.521 | 99.99th=[ 1083] 00:26:19.521 bw ( KiB/s): min=19968, max=144384, per=5.83%, avg=52227.05, stdev=34380.94, samples=20 00:26:19.521 iops : min= 78, max= 564, avg=204.00, stdev=134.31, samples=20 00:26:19.521 lat (msec) : 4=0.14%, 10=0.62%, 20=6.61%, 50=13.69%, 100=4.71% 00:26:19.521 lat (msec) : 250=15.73%, 500=41.21%, 750=14.35%, 1000=2.57%, 2000=0.38% 00:26:19.521 cpu : usr=0.12%, sys=0.77%, ctx=650, majf=0, minf=4097 00:26:19.521 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 
16=0.8%, 32=1.5%, >=64=97.0% 00:26:19.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.521 job4: (groupid=0, jobs=1): err= 0: pid=2754252: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=527, BW=132MiB/s (138MB/s)(1338MiB/10140msec) 00:26:19.521 slat (usec): min=15, max=403236, avg=1171.81, stdev=9147.87 00:26:19.521 clat (usec): min=1449, max=809311, avg=119946.96, stdev=143037.31 00:26:19.521 lat (usec): min=1623, max=814780, avg=121118.77, stdev=144076.60 00:26:19.521 clat percentiles (msec): 00:26:19.521 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 34], 00:26:19.521 | 30.00th=[ 37], 40.00th=[ 42], 50.00th=[ 53], 60.00th=[ 78], 00:26:19.521 | 70.00th=[ 108], 80.00th=[ 176], 90.00th=[ 376], 95.00th=[ 447], 00:26:19.521 | 99.00th=[ 642], 99.50th=[ 760], 99.90th=[ 760], 99.95th=[ 810], 00:26:19.521 | 99.99th=[ 810] 00:26:19.521 bw ( KiB/s): min=24576, max=341504, per=15.12%, avg=135402.50, stdev=112892.77, samples=20 00:26:19.521 iops : min= 96, max= 1334, avg=528.90, stdev=441.00, samples=20 00:26:19.521 lat (msec) : 2=0.06%, 4=0.15%, 10=0.65%, 20=4.63%, 50=43.39% 00:26:19.521 lat (msec) : 100=19.69%, 250=17.75%, 500=11.12%, 750=1.81%, 1000=0.75% 00:26:19.521 cpu : usr=0.19%, sys=2.01%, ctx=1350, majf=0, minf=4097 00:26:19.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:19.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=5352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.521 job5: (groupid=0, jobs=1): err= 0: pid=2754262: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=232, BW=58.2MiB/s (61.0MB/s)(588MiB/10099msec) 00:26:19.521 slat (usec): min=12, max=191676, avg=1620.42, stdev=10796.16 00:26:19.521 clat (usec): min=712, max=943746, avg=272994.97, stdev=200377.02 00:26:19.521 lat (usec): min=737, max=943778, avg=274615.39, stdev=201544.28 00:26:19.521 clat percentiles (usec): 00:26:19.521 | 1.00th=[ 1123], 5.00th=[ 20055], 10.00th=[ 39584], 20.00th=[ 63177], 00:26:19.521 | 30.00th=[115868], 40.00th=[177210], 50.00th=[250610], 60.00th=[337642], 00:26:19.521 | 70.00th=[392168], 80.00th=[446694], 90.00th=[526386], 95.00th=[616563], 00:26:19.521 | 99.00th=[817890], 99.50th=[834667], 99.90th=[884999], 99.95th=[884999], 00:26:19.521 | 99.99th=[943719] 00:26:19.521 bw ( KiB/s): min=19456, max=166912, per=6.54%, avg=58547.20, stdev=36011.01, samples=20 00:26:19.521 iops : min= 76, max= 652, avg=228.70, stdev=140.67, samples=20 00:26:19.521 lat (usec) : 750=0.09%, 1000=0.38% 00:26:19.521 lat (msec) : 2=1.45%, 4=0.26%, 10=1.28%, 20=1.53%, 50=9.57% 00:26:19.521 lat (msec) : 100=12.04%, 250=23.48%, 500=37.77%, 750=9.66%, 1000=2.51% 00:26:19.521 cpu : usr=0.12%, sys=0.95%, ctx=718, majf=0, minf=4097 00:26:19.521 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:19.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:19.521 job6: (groupid=0, jobs=1): err= 0: pid=2754266: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=312, BW=78.0MiB/s (81.8MB/s)(792MiB/10143msec) 00:26:19.521 slat (usec): min=15, max=496738, avg=1834.60, stdev=14744.60 00:26:19.521 clat (usec): min=1569, max=891749, avg=202946.60, stdev=200943.17 00:26:19.521 lat (usec): min=1618, max=891777, avg=204781.21, stdev=202731.50 00:26:19.521 clat percentiles (msec): 00:26:19.521 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 44], 00:26:19.521 | 30.00th=[ 51], 40.00th=[ 77], 50.00th=[ 121], 60.00th=[ 163], 00:26:19.521 | 70.00th=[ 253], 80.00th=[ 388], 90.00th=[ 531], 95.00th=[ 600], 00:26:19.521 | 99.00th=[ 852], 99.50th=[ 869], 99.90th=[ 885], 99.95th=[ 894], 00:26:19.521 | 99.99th=[ 894] 00:26:19.521 bw ( KiB/s): min=25600, max=203158, per=8.87%, avg=79431.50, stdev=47132.76, samples=20 00:26:19.521 iops : min= 100, max= 793, avg=310.25, stdev=184.03, samples=20 00:26:19.521 lat (msec) : 2=0.13%, 4=1.07%, 10=0.92%, 20=3.95%, 50=23.69% 00:26:19.521 lat (msec) : 100=15.82%, 250=24.19%, 500=18.04%, 750=10.87%, 1000=1.33% 00:26:19.521 cpu : usr=0.11%, sys=1.24%, ctx=1062, majf=0, minf=4097 00:26:19.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:19.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.521 issued rwts: total=3166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.521 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.521 job7: (groupid=0, jobs=1): err= 0: pid=2754273: Fri Dec 13 03:37:18 2024 00:26:19.521 read: IOPS=276, BW=69.2MiB/s (72.5MB/s)(702MiB/10148msec) 00:26:19.521 slat (usec): min=16, max=223143, avg=1679.34, stdev=13022.92 00:26:19.521 clat (msec): min=3, max=1024, avg=229.34, stdev=211.67 00:26:19.521 lat (msec): min=4, max=1024, avg=231.01, stdev=213.50 00:26:19.521 clat percentiles (msec): 00:26:19.521 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 14], 20.00th=[ 35], 00:26:19.521 | 30.00th=[ 57], 40.00th=[ 84], 50.00th=[ 150], 60.00th=[ 257], 00:26:19.521 | 70.00th=[ 380], 80.00th=[ 430], 90.00th=[ 523], 95.00th=[ 617], 00:26:19.521 | 99.00th=[ 827], 99.50th=[ 885], 99.90th=[ 1020], 99.95th=[ 1020], 00:26:19.521 | 99.99th=[ 1028] 00:26:19.521 bw ( KiB/s): min=16384, max=291328, per=7.85%, avg=70272.00, stdev=69294.12, samples=20 00:26:19.521 iops : min= 64, max= 1138, avg=274.50, stdev=270.68, samples=20 00:26:19.521 lat (msec) : 4=0.04%, 10=7.98%, 20=3.42%, 50=16.35%, 100=14.71% 00:26:19.521 lat (msec) : 250=17.13%, 500=30.02%, 750=8.87%, 1000=1.39%, 2000=0.11% 00:26:19.521 cpu : usr=0.14%, sys=1.07%, ctx=676, majf=0, minf=4097 00:26:19.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:19.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.522 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.522 job8: (groupid=0, jobs=1): err= 0: pid=2754296: Fri Dec 13 03:37:18 2024 00:26:19.522 read: IOPS=519, BW=130MiB/s (136MB/s)(1301MiB/10014msec) 00:26:19.522 slat (usec): min=15, max=223396, avg=1271.39, stdev=7963.00 00:26:19.522 clat (msec): min=8, max=768, avg=121.73, stdev=153.35 00:26:19.522 lat (msec): min=8, max=768, avg=123.00, stdev=154.50 00:26:19.522 clat percentiles (msec): 
00:26:19.522 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 33], 00:26:19.522 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 50], 60.00th=[ 64], 00:26:19.522 | 70.00th=[ 89], 80.00th=[ 182], 90.00th=[ 380], 95.00th=[ 477], 00:26:19.522 | 99.00th=[ 667], 99.50th=[ 693], 99.90th=[ 751], 99.95th=[ 751], 00:26:19.522 | 99.99th=[ 768] 00:26:19.522 bw ( KiB/s): min=25088, max=429568, per=14.70%, avg=131635.20, stdev=133743.40, samples=20 00:26:19.522 iops : min= 98, max= 1678, avg=514.20, stdev=522.44, samples=20 00:26:19.522 lat (msec) : 10=0.08%, 20=0.56%, 50=49.78%, 100=22.29%, 250=10.62% 00:26:19.522 lat (msec) : 500=12.53%, 750=4.03%, 1000=0.12% 00:26:19.522 cpu : usr=0.14%, sys=2.03%, ctx=821, majf=0, minf=3722 00:26:19.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:19.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.522 issued rwts: total=5205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.522 job9: (groupid=0, jobs=1): err= 0: pid=2754307: Fri Dec 13 03:37:18 2024 00:26:19.522 read: IOPS=188, BW=47.1MiB/s (49.4MB/s)(477MiB/10136msec) 00:26:19.522 slat (usec): min=15, max=488061, avg=3320.93, stdev=21667.11 00:26:19.522 clat (usec): min=1666, max=1016.2k, avg=336134.62, stdev=203722.24 00:26:19.522 lat (usec): min=1721, max=1086.5k, avg=339455.55, stdev=205585.26 00:26:19.522 clat percentiles (msec): 00:26:19.522 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 73], 20.00th=[ 150], 00:26:19.522 | 30.00th=[ 224], 40.00th=[ 271], 50.00th=[ 321], 60.00th=[ 376], 00:26:19.522 | 70.00th=[ 430], 80.00th=[ 510], 90.00th=[ 600], 95.00th=[ 659], 00:26:19.522 | 99.00th=[ 1003], 99.50th=[ 1003], 99.90th=[ 1011], 99.95th=[ 1020], 00:26:19.522 | 99.99th=[ 1020] 00:26:19.522 bw ( KiB/s): min=12800, max=90624, per=5.28%, avg=47260.80, stdev=20237.69, samples=20 00:26:19.522 iops : min= 50, max= 354, avg=184.60, stdev=79.06, samples=20 00:26:19.522 lat (msec) : 2=0.16%, 4=0.26%, 10=4.82%, 20=0.31%, 50=2.10% 00:26:19.522 lat (msec) : 100=6.86%, 250=18.54%, 500=45.31%, 750=19.64%, 1000=0.26% 00:26:19.522 lat (msec) : 2000=1.73% 00:26:19.522 cpu : usr=0.07%, sys=0.70%, ctx=383, majf=0, minf=4097 00:26:19.522 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:19.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.522 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.522 issued rwts: total=1909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.522 job10: (groupid=0, jobs=1): err= 0: pid=2754315: Fri Dec 13 03:37:18 2024 00:26:19.522 read: IOPS=312, BW=78.2MiB/s (82.0MB/s)(791MiB/10112msec) 00:26:19.522 slat (usec): min=15, max=152290, avg=2725.52, stdev=11898.18 00:26:19.522 clat (msec): min=12, max=838, avg=201.54, stdev=161.82 00:26:19.522 lat (msec): min=12, max=838, avg=204.26, stdev=164.04 00:26:19.522 clat percentiles (msec): 00:26:19.522 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 48], 00:26:19.522 | 30.00th=[ 82], 40.00th=[ 125], 50.00th=[ 165], 60.00th=[ 218], 00:26:19.522 | 70.00th=[ 264], 80.00th=[ 321], 90.00th=[ 430], 95.00th=[ 510], 00:26:19.522 | 99.00th=[ 709], 99.50th=[ 760], 99.90th=[ 802], 99.95th=[ 835], 00:26:19.522 | 99.99th=[ 835] 00:26:19.522 bw ( KiB/s): min=21504, max=381952, per=8.86%, 
avg=79385.60, stdev=79257.53, samples=20 00:26:19.522 iops : min= 84, max= 1492, avg=310.10, stdev=309.60, samples=20 00:26:19.522 lat (msec) : 20=3.29%, 50=17.95%, 100=13.87%, 250=32.23%, 500=27.17% 00:26:19.522 lat (msec) : 750=4.93%, 1000=0.57% 00:26:19.522 cpu : usr=0.15%, sys=1.29%, ctx=436, majf=0, minf=4097 00:26:19.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:19.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:19.522 issued rwts: total=3165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.522 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:19.522 00:26:19.522 Run status group 0 (all jobs): 00:26:19.522 READ: bw=875MiB/s (917MB/s), 47.1MiB/s-132MiB/s (49.4MB/s-138MB/s), io=8875MiB (9306MB), run=10014-10148msec 00:26:19.522 00:26:19.522 Disk stats (read/write): 00:26:19.522 nvme0n1: ios=4657/0, merge=0/0, ticks=1233595/0, in_queue=1233595, util=97.29% 00:26:19.522 nvme10n1: ios=5772/0, merge=0/0, ticks=1235620/0, in_queue=1235620, util=97.45% 00:26:19.522 nvme1n1: ios=7990/0, merge=0/0, ticks=1221854/0, in_queue=1221854, util=97.76% 00:26:19.522 nvme2n1: ios=4059/0, merge=0/0, ticks=1220786/0, in_queue=1220786, util=97.88% 00:26:19.522 nvme3n1: ios=10568/0, merge=0/0, ticks=1229116/0, in_queue=1229116, util=97.95% 00:26:19.522 nvme4n1: ios=4551/0, merge=0/0, ticks=1228940/0, in_queue=1228940, util=98.31% 00:26:19.522 nvme5n1: ios=6168/0, merge=0/0, ticks=1220308/0, in_queue=1220308, util=98.45% 00:26:19.522 nvme6n1: ios=5449/0, merge=0/0, ticks=1232465/0, in_queue=1232465, util=98.59% 00:26:19.522 nvme7n1: ios=10065/0, merge=0/0, ticks=1246272/0, in_queue=1246272, util=98.97% 00:26:19.522 nvme8n1: ios=3648/0, merge=0/0, ticks=1224007/0, in_queue=1224007, util=99.11% 00:26:19.522 nvme9n1: ios=6172/0, merge=0/0, ticks=1232817/0, in_queue=1232817, util=99.24% 00:26:19.522 03:37:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:19.522 [global] 00:26:19.522 thread=1 00:26:19.522 invalidate=1 00:26:19.522 rw=randwrite 00:26:19.522 time_based=1 00:26:19.522 runtime=10 00:26:19.522 ioengine=libaio 00:26:19.522 direct=1 00:26:19.522 bs=262144 00:26:19.522 iodepth=64 00:26:19.522 norandommap=1 00:26:19.522 numjobs=1 00:26:19.522 00:26:19.522 [job0] 00:26:19.522 filename=/dev/nvme0n1 00:26:19.522 [job1] 00:26:19.522 filename=/dev/nvme10n1 00:26:19.522 [job2] 00:26:19.522 filename=/dev/nvme1n1 00:26:19.522 [job3] 00:26:19.522 filename=/dev/nvme2n1 00:26:19.522 [job4] 00:26:19.522 filename=/dev/nvme3n1 00:26:19.522 [job5] 00:26:19.522 filename=/dev/nvme4n1 00:26:19.522 [job6] 00:26:19.522 filename=/dev/nvme5n1 00:26:19.522 [job7] 00:26:19.522 filename=/dev/nvme6n1 00:26:19.522 [job8] 00:26:19.522 filename=/dev/nvme7n1 00:26:19.522 [job9] 00:26:19.522 filename=/dev/nvme8n1 00:26:19.522 [job10] 00:26:19.522 filename=/dev/nvme9n1 00:26:19.522 Could not set queue depth (nvme0n1) 00:26:19.522 Could not set queue depth (nvme10n1) 00:26:19.522 Could not set queue depth (nvme1n1) 00:26:19.522 Could not set queue depth (nvme2n1) 00:26:19.522 Could not set queue depth (nvme3n1) 00:26:19.522 Could not set queue depth (nvme4n1) 00:26:19.522 Could not set queue depth (nvme5n1) 00:26:19.522 Could not set queue depth (nvme6n1) 00:26:19.522 Could not set queue depth (nvme7n1) 00:26:19.522 
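The read pass closes with the aggregate "Run status" line and per-device disk statistics above, after which the same wrapper is invoked a second time with -t randwrite to repeat the identical layout as a random-write workload. The aggregate line is self-consistent with the per-job figures; as a quick illustrative check (plain shell arithmetic, not part of the test itself), total read volume divided by aggregate bandwidth should land on the roughly 10-second runtime.

# 8875 MiB of total I/O at an aggregate 875 MiB/s implies ~10.1 s of runtime,
# which matches the reported run=10014-10148msec window.
echo 'scale=2; 8875 / 875' | bc    # -> 10.14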
Could not set queue depth (nvme8n1) 00:26:19.522 Could not set queue depth (nvme9n1) 00:26:19.522 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:19.522 fio-3.35 00:26:19.522 Starting 11 threads 00:26:29.504 00:26:29.504 job0: (groupid=0, jobs=1): err= 0: pid=2755272: Fri Dec 13 03:37:30 2024 00:26:29.504 write: IOPS=378, BW=94.6MiB/s (99.2MB/s)(971MiB/10262msec); 0 zone resets 00:26:29.504 slat (usec): min=25, max=97074, avg=1995.57, stdev=5166.50 00:26:29.504 clat (usec): min=1027, max=615182, avg=167088.30, stdev=94595.11 00:26:29.504 lat (usec): min=1067, max=615231, avg=169083.87, stdev=95366.08 00:26:29.504 clat percentiles (msec): 00:26:29.504 | 1.00th=[ 15], 5.00th=[ 66], 10.00th=[ 90], 20.00th=[ 96], 00:26:29.504 | 30.00th=[ 106], 40.00th=[ 112], 50.00th=[ 138], 60.00th=[ 163], 00:26:29.504 | 70.00th=[ 197], 80.00th=[ 245], 90.00th=[ 288], 95.00th=[ 355], 00:26:29.504 | 99.00th=[ 464], 99.50th=[ 542], 99.90th=[ 600], 99.95th=[ 609], 00:26:29.504 | 99.99th=[ 617] 00:26:29.504 bw ( KiB/s): min=36864, max=172032, per=10.10%, avg=97740.80, stdev=39404.54, samples=20 00:26:29.504 iops : min= 144, max= 672, avg=381.80, stdev=153.92, samples=20 00:26:29.504 lat (msec) : 2=0.18%, 4=0.08%, 10=0.18%, 20=1.24%, 50=2.55% 00:26:29.504 lat (msec) : 100=20.12%, 250=57.03%, 500=17.95%, 750=0.67% 00:26:29.504 cpu : usr=0.94%, sys=1.14%, ctx=1636, majf=0, minf=1 00:26:29.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:29.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.504 issued rwts: total=0,3882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.504 job1: (groupid=0, jobs=1): err= 0: pid=2755284: Fri Dec 13 03:37:30 2024 00:26:29.504 write: IOPS=363, BW=90.9MiB/s (95.4MB/s)(917MiB/10086msec); 0 zone resets 00:26:29.504 slat (usec): min=24, max=31262, avg=2258.33, stdev=5289.65 00:26:29.504 clat (msec): min=2, max=458, avg=173.61, stdev=96.70 00:26:29.504 lat (msec): min=3, max=463, avg=175.87, stdev=97.90 00:26:29.504 clat percentiles (msec): 
00:26:29.504 | 1.00th=[ 30], 5.00th=[ 56], 10.00th=[ 68], 20.00th=[ 90], 00:26:29.504 | 30.00th=[ 95], 40.00th=[ 115], 50.00th=[ 159], 60.00th=[ 194], 00:26:29.504 | 70.00th=[ 220], 80.00th=[ 255], 90.00th=[ 330], 95.00th=[ 359], 00:26:29.504 | 99.00th=[ 430], 99.50th=[ 443], 99.90th=[ 456], 99.95th=[ 456], 00:26:29.504 | 99.99th=[ 460] 00:26:29.504 bw ( KiB/s): min=45056, max=238592, per=9.54%, avg=92320.55, stdev=48661.44, samples=20 00:26:29.504 iops : min= 176, max= 932, avg=360.60, stdev=190.10, samples=20 00:26:29.504 lat (msec) : 4=0.03%, 10=0.25%, 20=0.27%, 50=2.29%, 100=32.62% 00:26:29.504 lat (msec) : 250=42.95%, 500=21.59% 00:26:29.504 cpu : usr=0.80%, sys=1.06%, ctx=1445, majf=0, minf=1 00:26:29.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:29.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.504 issued rwts: total=0,3669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.504 job2: (groupid=0, jobs=1): err= 0: pid=2755285: Fri Dec 13 03:37:30 2024 00:26:29.504 write: IOPS=279, BW=69.9MiB/s (73.3MB/s)(717MiB/10257msec); 0 zone resets 00:26:29.504 slat (usec): min=22, max=154489, avg=3020.72, stdev=7601.58 00:26:29.504 clat (usec): min=1668, max=638606, avg=225656.02, stdev=98716.78 00:26:29.504 lat (usec): min=1729, max=638659, avg=228676.74, stdev=99386.00 00:26:29.504 clat percentiles (msec): 00:26:29.504 | 1.00th=[ 15], 5.00th=[ 96], 10.00th=[ 108], 20.00th=[ 146], 00:26:29.504 | 30.00th=[ 167], 40.00th=[ 197], 50.00th=[ 218], 60.00th=[ 234], 00:26:29.504 | 70.00th=[ 255], 80.00th=[ 305], 90.00th=[ 368], 95.00th=[ 401], 00:26:29.504 | 99.00th=[ 535], 99.50th=[ 558], 99.90th=[ 642], 99.95th=[ 642], 00:26:29.504 | 99.99th=[ 642] 00:26:29.504 bw ( KiB/s): min=40960, max=129024, per=7.42%, avg=71787.30, stdev=24946.63, samples=20 00:26:29.504 iops : min= 160, max= 504, avg=280.40, stdev=97.47, samples=20 00:26:29.504 lat (msec) : 2=0.03%, 4=0.31%, 10=0.42%, 20=0.28%, 50=0.42% 00:26:29.504 lat (msec) : 100=5.93%, 250=61.05%, 500=29.99%, 750=1.57% 00:26:29.504 cpu : usr=0.77%, sys=0.92%, ctx=974, majf=0, minf=1 00:26:29.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:29.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.504 issued rwts: total=0,2868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.504 job3: (groupid=0, jobs=1): err= 0: pid=2755286: Fri Dec 13 03:37:30 2024 00:26:29.504 write: IOPS=373, BW=93.5MiB/s (98.0MB/s)(940MiB/10051msec); 0 zone resets 00:26:29.504 slat (usec): min=18, max=80223, avg=1664.41, stdev=5419.97 00:26:29.504 clat (usec): min=1525, max=477778, avg=169411.55, stdev=122514.65 00:26:29.504 lat (usec): min=1580, max=482803, avg=171075.96, stdev=123817.20 00:26:29.504 clat percentiles (msec): 00:26:29.504 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 35], 20.00th=[ 58], 00:26:29.504 | 30.00th=[ 68], 40.00th=[ 101], 50.00th=[ 144], 60.00th=[ 171], 00:26:29.504 | 70.00th=[ 232], 80.00th=[ 313], 90.00th=[ 359], 95.00th=[ 384], 00:26:29.504 | 99.00th=[ 435], 99.50th=[ 456], 99.90th=[ 477], 99.95th=[ 477], 00:26:29.504 | 99.99th=[ 477] 00:26:29.504 bw ( KiB/s): min=40960, max=264192, per=9.78%, avg=94625.90, stdev=54881.82, 
samples=20 00:26:29.504 iops : min= 160, max= 1032, avg=369.60, stdev=214.39, samples=20 00:26:29.504 lat (msec) : 2=0.16%, 4=0.48%, 10=2.95%, 20=3.70%, 50=4.97% 00:26:29.504 lat (msec) : 100=27.69%, 250=32.22%, 500=27.83% 00:26:29.504 cpu : usr=0.90%, sys=1.21%, ctx=2111, majf=0, minf=2 00:26:29.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:29.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.504 issued rwts: total=0,3759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.504 job4: (groupid=0, jobs=1): err= 0: pid=2755287: Fri Dec 13 03:37:30 2024 00:26:29.504 write: IOPS=453, BW=113MiB/s (119MB/s)(1141MiB/10051msec); 0 zone resets 00:26:29.504 slat (usec): min=21, max=192194, avg=1502.96, stdev=6229.95 00:26:29.504 clat (msec): min=3, max=482, avg=139.38, stdev=110.81 00:26:29.504 lat (msec): min=3, max=482, avg=140.88, stdev=112.08 00:26:29.504 clat percentiles (msec): 00:26:29.504 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 51], 00:26:29.504 | 30.00th=[ 53], 40.00th=[ 66], 50.00th=[ 97], 60.00th=[ 131], 00:26:29.504 | 70.00th=[ 169], 80.00th=[ 230], 90.00th=[ 342], 95.00th=[ 368], 00:26:29.504 | 99.00th=[ 426], 99.50th=[ 447], 99.90th=[ 472], 99.95th=[ 481], 00:26:29.504 | 99.99th=[ 485] 00:26:29.504 bw ( KiB/s): min=43008, max=333312, per=11.91%, avg=115210.55, stdev=76674.73, samples=20 00:26:29.504 iops : min= 168, max= 1302, avg=450.00, stdev=299.52, samples=20 00:26:29.504 lat (msec) : 4=0.04%, 10=0.13%, 20=1.53%, 50=14.92%, 100=34.63% 00:26:29.504 lat (msec) : 250=31.08%, 500=17.66% 00:26:29.504 cpu : usr=1.05%, sys=1.39%, ctx=2275, majf=0, minf=1 00:26:29.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:29.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.504 issued rwts: total=0,4563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.504 job5: (groupid=0, jobs=1): err= 0: pid=2755288: Fri Dec 13 03:37:30 2024 00:26:29.504 write: IOPS=286, BW=71.5MiB/s (75.0MB/s)(735MiB/10266msec); 0 zone resets 00:26:29.504 slat (usec): min=22, max=177855, avg=2704.81, stdev=6897.19 00:26:29.504 clat (msec): min=13, max=788, avg=220.28, stdev=100.81 00:26:29.504 lat (msec): min=17, max=788, avg=222.99, stdev=101.67 00:26:29.504 clat percentiles (msec): 00:26:29.504 | 1.00th=[ 26], 5.00th=[ 68], 10.00th=[ 97], 20.00th=[ 136], 00:26:29.504 | 30.00th=[ 155], 40.00th=[ 194], 50.00th=[ 230], 60.00th=[ 245], 00:26:29.504 | 70.00th=[ 255], 80.00th=[ 284], 90.00th=[ 363], 95.00th=[ 401], 00:26:29.504 | 99.00th=[ 493], 99.50th=[ 523], 99.90th=[ 567], 99.95th=[ 760], 00:26:29.504 | 99.99th=[ 793] 00:26:29.504 bw ( KiB/s): min=38912, max=128000, per=7.61%, avg=73581.70, stdev=21729.27, samples=20 00:26:29.504 iops : min= 152, max= 500, avg=287.40, stdev=84.88, samples=20 00:26:29.504 lat (msec) : 20=0.31%, 50=4.12%, 100=6.98%, 250=55.21%, 500=32.51% 00:26:29.504 lat (msec) : 750=0.82%, 1000=0.07% 00:26:29.504 cpu : usr=0.59%, sys=0.94%, ctx=1227, majf=0, minf=1 00:26:29.504 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:29.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.504 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.505 issued rwts: total=0,2938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.505 job6: (groupid=0, jobs=1): err= 0: pid=2755289: Fri Dec 13 03:37:30 2024 00:26:29.505 write: IOPS=319, BW=79.8MiB/s (83.6MB/s)(818MiB/10258msec); 0 zone resets 00:26:29.505 slat (usec): min=30, max=46880, avg=2433.29, stdev=6264.81 00:26:29.505 clat (usec): min=1100, max=656005, avg=197976.32, stdev=123842.65 00:26:29.505 lat (usec): min=1167, max=656078, avg=200409.62, stdev=125399.35 00:26:29.505 clat percentiles (msec): 00:26:29.505 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 71], 00:26:29.505 | 30.00th=[ 124], 40.00th=[ 155], 50.00th=[ 186], 60.00th=[ 224], 00:26:29.505 | 70.00th=[ 279], 80.00th=[ 321], 90.00th=[ 363], 95.00th=[ 388], 00:26:29.505 | 99.00th=[ 464], 99.50th=[ 527], 99.90th=[ 642], 99.95th=[ 651], 00:26:29.505 | 99.99th=[ 659] 00:26:29.505 bw ( KiB/s): min=38912, max=274432, per=8.49%, avg=82160.75, stdev=55279.37, samples=20 00:26:29.505 iops : min= 152, max= 1072, avg=320.90, stdev=215.92, samples=20 00:26:29.505 lat (msec) : 2=0.46%, 4=0.61%, 10=2.11%, 20=4.16%, 50=7.82% 00:26:29.505 lat (msec) : 100=9.56%, 250=39.51%, 500=35.11%, 750=0.67% 00:26:29.505 cpu : usr=0.75%, sys=0.97%, ctx=1672, majf=0, minf=1 00:26:29.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.505 issued rwts: total=0,3273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.505 job7: (groupid=0, jobs=1): err= 0: pid=2755290: Fri Dec 13 03:37:30 2024 00:26:29.505 write: IOPS=480, BW=120MiB/s (126MB/s)(1232MiB/10260msec); 0 zone resets 00:26:29.505 slat (usec): min=28, max=116297, avg=1350.95, stdev=4450.02 00:26:29.505 clat (usec): min=1182, max=712883, avg=131820.59, stdev=95275.30 00:26:29.505 lat (usec): min=1240, max=712926, avg=133171.54, stdev=96127.61 00:26:29.505 clat percentiles (msec): 00:26:29.505 | 1.00th=[ 7], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 54], 00:26:29.505 | 30.00th=[ 67], 40.00th=[ 95], 50.00th=[ 107], 60.00th=[ 120], 00:26:29.505 | 70.00th=[ 161], 80.00th=[ 199], 90.00th=[ 255], 95.00th=[ 321], 00:26:29.505 | 99.00th=[ 422], 99.50th=[ 584], 99.90th=[ 709], 99.95th=[ 709], 00:26:29.505 | 99.99th=[ 709] 00:26:29.505 bw ( KiB/s): min=46592, max=258560, per=12.87%, avg=124477.95, stdev=60785.53, samples=20 00:26:29.505 iops : min= 182, max= 1010, avg=486.20, stdev=237.46, samples=20 00:26:29.505 lat (msec) : 2=0.18%, 4=0.55%, 10=0.51%, 20=1.50%, 50=8.34% 00:26:29.505 lat (msec) : 100=33.54%, 250=44.99%, 500=9.68%, 750=0.71% 00:26:29.505 cpu : usr=1.17%, sys=1.69%, ctx=2613, majf=0, minf=1 00:26:29.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.505 issued rwts: total=0,4926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.505 job8: (groupid=0, jobs=1): err= 0: pid=2755291: Fri Dec 13 03:37:30 2024 00:26:29.505 write: IOPS=329, BW=82.4MiB/s (86.4MB/s)(846MiB/10267msec); 0 zone resets 00:26:29.505 slat (usec): min=19, 
max=118509, avg=2315.44, stdev=6013.78 00:26:29.505 clat (usec): min=1445, max=519700, avg=191875.84, stdev=104775.33 00:26:29.505 lat (usec): min=1520, max=519764, avg=194191.28, stdev=105956.23 00:26:29.505 clat percentiles (msec): 00:26:29.505 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 50], 20.00th=[ 82], 00:26:29.505 | 30.00th=[ 133], 40.00th=[ 167], 50.00th=[ 203], 60.00th=[ 222], 00:26:29.505 | 70.00th=[ 243], 80.00th=[ 259], 90.00th=[ 326], 95.00th=[ 393], 00:26:29.505 | 99.00th=[ 460], 99.50th=[ 493], 99.90th=[ 514], 99.95th=[ 518], 00:26:29.505 | 99.99th=[ 518] 00:26:29.505 bw ( KiB/s): min=38912, max=173568, per=8.78%, avg=84952.55, stdev=36466.79, samples=20 00:26:29.505 iops : min= 152, max= 678, avg=331.80, stdev=142.40, samples=20 00:26:29.505 lat (msec) : 2=0.12%, 4=0.03%, 10=0.18%, 20=0.47%, 50=9.23% 00:26:29.505 lat (msec) : 100=13.84%, 250=52.99%, 500=22.86%, 750=0.30% 00:26:29.505 cpu : usr=0.70%, sys=1.12%, ctx=1627, majf=0, minf=1 00:26:29.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.505 issued rwts: total=0,3382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.505 job9: (groupid=0, jobs=1): err= 0: pid=2755292: Fri Dec 13 03:37:30 2024 00:26:29.505 write: IOPS=277, BW=69.4MiB/s (72.7MB/s)(700MiB/10084msec); 0 zone resets 00:26:29.505 slat (usec): min=23, max=82370, avg=2926.55, stdev=6712.19 00:26:29.505 clat (usec): min=1588, max=447563, avg=227660.21, stdev=100305.28 00:26:29.505 lat (usec): min=1660, max=447610, avg=230586.76, stdev=101353.99 00:26:29.505 clat percentiles (msec): 00:26:29.505 | 1.00th=[ 4], 5.00th=[ 46], 10.00th=[ 93], 20.00th=[ 144], 00:26:29.505 | 30.00th=[ 171], 40.00th=[ 209], 50.00th=[ 236], 60.00th=[ 247], 00:26:29.505 | 70.00th=[ 268], 80.00th=[ 317], 90.00th=[ 372], 95.00th=[ 393], 00:26:29.505 | 99.00th=[ 430], 99.50th=[ 443], 99.90th=[ 447], 99.95th=[ 447], 00:26:29.505 | 99.99th=[ 447] 00:26:29.505 bw ( KiB/s): min=40960, max=109056, per=7.24%, avg=70016.00, stdev=18633.66, samples=20 00:26:29.505 iops : min= 160, max= 426, avg=273.50, stdev=72.79, samples=20 00:26:29.505 lat (msec) : 2=0.07%, 4=0.93%, 10=2.36%, 20=0.21%, 50=2.11% 00:26:29.505 lat (msec) : 100=5.11%, 250=51.86%, 500=37.35% 00:26:29.505 cpu : usr=0.67%, sys=0.90%, ctx=1200, majf=0, minf=1 00:26:29.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:26:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.505 issued rwts: total=0,2798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.505 job10: (groupid=0, jobs=1): err= 0: pid=2755293: Fri Dec 13 03:37:30 2024 00:26:29.505 write: IOPS=266, BW=66.7MiB/s (69.9MB/s)(684MiB/10259msec); 0 zone resets 00:26:29.505 slat (usec): min=30, max=97017, avg=2702.55, stdev=7276.85 00:26:29.505 clat (msec): min=5, max=495, avg=236.46, stdev=119.18 00:26:29.505 lat (msec): min=5, max=495, avg=239.16, stdev=120.73 00:26:29.505 clat percentiles (msec): 00:26:29.505 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 54], 20.00th=[ 121], 00:26:29.505 | 30.00th=[ 159], 40.00th=[ 215], 50.00th=[ 262], 60.00th=[ 284], 00:26:29.505 | 70.00th=[ 313], 80.00th=[ 351], 90.00th=[ 
372], 95.00th=[ 409], 00:26:29.505 | 99.00th=[ 472], 99.50th=[ 481], 99.90th=[ 493], 99.95th=[ 493], 00:26:29.505 | 99.99th=[ 498] 00:26:29.505 bw ( KiB/s): min=40960, max=136192, per=7.08%, avg=68454.40, stdev=28219.52, samples=20 00:26:29.505 iops : min= 160, max= 532, avg=267.40, stdev=110.23, samples=20 00:26:29.505 lat (msec) : 10=0.40%, 20=1.17%, 50=7.53%, 100=8.04%, 250=28.57% 00:26:29.505 lat (msec) : 500=54.29% 00:26:29.505 cpu : usr=0.63%, sys=0.97%, ctx=1452, majf=0, minf=1 00:26:29.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:29.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:29.505 issued rwts: total=0,2737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.505 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:29.505 00:26:29.505 Run status group 0 (all jobs): 00:26:29.505 WRITE: bw=945MiB/s (991MB/s), 66.7MiB/s-120MiB/s (69.9MB/s-126MB/s), io=9699MiB (10.2GB), run=10051-10267msec 00:26:29.505 00:26:29.505 Disk stats (read/write): 00:26:29.505 nvme0n1: ios=49/7714, merge=0/0, ticks=43/1239014, in_queue=1239057, util=97.44% 00:26:29.505 nvme10n1: ios=45/7104, merge=0/0, ticks=63/1216541, in_queue=1216604, util=97.70% 00:26:29.505 nvme1n1: ios=49/5689, merge=0/0, ticks=4270/1222464, in_queue=1226734, util=100.00% 00:26:29.505 nvme2n1: ios=0/7232, merge=0/0, ticks=0/1227795, in_queue=1227795, util=97.77% 00:26:29.505 nvme3n1: ios=43/8828, merge=0/0, ticks=2896/1182414, in_queue=1185310, util=100.00% 00:26:29.505 nvme4n1: ios=46/5818, merge=0/0, ticks=1699/1226094, in_queue=1227793, util=100.00% 00:26:29.505 nvme5n1: ios=44/6498, merge=0/0, ticks=2126/1232444, in_queue=1234570, util=100.00% 00:26:29.505 nvme6n1: ios=48/9803, merge=0/0, ticks=2312/1236527, in_queue=1238839, util=100.00% 00:26:29.505 nvme7n1: ios=0/6704, merge=0/0, ticks=0/1232593, in_queue=1232593, util=98.81% 00:26:29.505 nvme8n1: ios=0/5364, merge=0/0, ticks=0/1216509, in_queue=1216509, util=98.93% 00:26:29.505 nvme9n1: ios=43/5425, merge=0/0, ticks=1238/1224202, in_queue=1225440, util=100.00% 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:29.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:29.505 
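
For reference, the randwrite pass above can be reproduced outside the fio-wrapper harness: every [global] option and the per-device [jobN] layout are already in the job dump printed before "Starting 11 threads". A minimal sketch follows, assuming those same options; the /dev/nvme*n1 paths are just the namespaces this host happened to enumerate after the eleven NVMe-oF connects and will differ on other machines.

#!/usr/bin/env bash
# Minimal sketch rebuilding the job that fio-wrapper ran above
# (-p nvmf -i 262144 -d 64 -t randwrite -r 10). Device paths are examples only.
cat > multiconnection-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme10n1

# ...one [jobN] section per remaining namespace, exactly as in the dump above...
EOF

fio multiconnection-randwrite.fio
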
03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:29.505 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.764 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.764 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:29.764 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.764 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.764 03:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:30.332 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.332 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:30.900 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:30.900 
03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.900 03:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:31.468 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.468 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:31.726 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:31.726 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:31.726 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.726 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.726 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:31.727 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:31.727 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.727 
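
The waitforserial_disconnect calls traced here (autotest_common.sh around lines 1223-1235) poll lsblk until no block device reports the given SPDK serial any more. A simplified reconstruction from the probes visible in the xtrace follows; the sleep and the retry bound are assumptions, since the trace only shows the lsblk/grep probes of each call, not the full helper body.

# Simplified reconstruction of waitforserial_disconnect as seen in the trace.
# Only the lsblk | grep probes are visible above; the retry limit is assumed.
waitforserial_disconnect() {
    local serial=$1
    local i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        i=$((i + 1))
        ((i > 15)) && return 1   # assumed upper bound so a stuck disconnect fails the test
        sleep 1
    done
    return 0
}

# Usage matching the trace:
waitforserial_disconnect SPDK3
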
03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.727 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:31.727 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.727 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.986 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.986 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.986 03:37:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:32.245 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.245 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:32.504 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:32.504 
03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.504 03:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:33.072 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.072 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:33.330 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.330 
03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.330 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:33.896 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.896 03:37:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:34.154 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 
00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.154 rmmod nvme_tcp 00:26:34.154 rmmod nvme_fabrics 00:26:34.154 rmmod nvme_keyring 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 2747711 ']' 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 2747711 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 2747711 ']' 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 2747711 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2747711 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747711' 00:26:34.154 killing process with pid 2747711 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@973 -- # kill 2747711 00:26:34.154 03:37:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 2747711 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.441 03:37:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:39.977 00:26:39.977 real 1m17.123s 00:26:39.977 user 4m40.424s 00:26:39.977 sys 0m17.101s 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:39.977 ************************************ 00:26:39.977 END TEST nvmf_multiconnection 00:26:39.977 ************************************ 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:39.977 ************************************ 00:26:39.977 START TEST nvmf_initiator_timeout 00:26:39.977 ************************************ 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:39.977 * Looking for test storage... 
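
Taken together, the nvmf_multiconnection teardown traced above reduces to a short per-subsystem loop plus module cleanup. The sketch below is a hedged condensation of what multiconnection.sh lines 37-40 and nvmftestfini do according to the xtrace, calling the SPDK rpc.py client directly instead of the harness rpc_cmd wrapper; nvmf_tgt_pid is a placeholder for the traced PID 2747711, and the interface name comes from this host's e810 setup.

# Hedged condensation of the teardown traced above (11 subsystems in this run).
NVMF_SUBSYS=11
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in $(seq 1 $NVMF_SUBSYS); do
    # Drop the host-side connection, wait for the SPDKn serial to vanish,
    # then delete the subsystem on the target side.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    waitforserial_disconnect "SPDK$i"              # helper as reconstructed earlier
    "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done

# nvmftestfini: flush I/O, unload the initiator modules, stop the target app.
sync
modprobe -v -r nvme-tcp        # trace shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"   # placeholder for the traced "kill 2747711; wait 2747711"
ip -4 addr flush cvl_0_1                       # as run by nvmf/common.sh above
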
00:26:39.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.977 --rc genhtml_branch_coverage=1 00:26:39.977 --rc genhtml_function_coverage=1 00:26:39.977 --rc genhtml_legend=1 00:26:39.977 --rc geninfo_all_blocks=1 00:26:39.977 --rc geninfo_unexecuted_blocks=1 00:26:39.977 00:26:39.977 ' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.977 --rc genhtml_branch_coverage=1 00:26:39.977 --rc genhtml_function_coverage=1 00:26:39.977 --rc genhtml_legend=1 00:26:39.977 --rc geninfo_all_blocks=1 00:26:39.977 --rc geninfo_unexecuted_blocks=1 00:26:39.977 00:26:39.977 ' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.977 --rc genhtml_branch_coverage=1 00:26:39.977 --rc genhtml_function_coverage=1 00:26:39.977 --rc genhtml_legend=1 00:26:39.977 --rc geninfo_all_blocks=1 00:26:39.977 --rc geninfo_unexecuted_blocks=1 00:26:39.977 00:26:39.977 ' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.977 --rc genhtml_branch_coverage=1 00:26:39.977 --rc genhtml_function_coverage=1 00:26:39.977 --rc genhtml_legend=1 00:26:39.977 --rc geninfo_all_blocks=1 00:26:39.977 --rc geninfo_unexecuted_blocks=1 00:26:39.977 00:26:39.977 ' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.977 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.978 03:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:39.978 03:37:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:45.251 03:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:45.251 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.251 03:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:45.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:45.251 Found net devices under 0000:af:00.0: cvl_0_0 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.251 03:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:45.251 Found net devices under 0000:af:00.1: cvl_0_1 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.251 03:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.251 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:45.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:45.511 00:26:45.511 --- 10.0.0.2 ping statistics --- 00:26:45.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.511 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:26:45.511 00:26:45.511 --- 10.0.0.1 ping statistics --- 00:26:45.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.511 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=2761090 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
2761090 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 2761090 ']' 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.511 03:37:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.511 [2024-12-13 03:37:46.593837] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:45.511 [2024-12-13 03:37:46.593933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.511 [2024-12-13 03:37:46.710742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.770 [2024-12-13 03:37:46.837787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.770 [2024-12-13 03:37:46.837832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.770 [2024-12-13 03:37:46.837843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.770 [2024-12-13 03:37:46.837854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.770 [2024-12-13 03:37:46.837866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
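
Editor's note: the trace above captures the whole TCP test-bed bring-up (nvmf_tcp_init) before nvmf_tgt is started. As a reading aid, here is a minimal standalone sketch of the same namespace plumbing, assembled from the exact commands shown in the trace; the interface names, addresses and port are the values from this particular run (E810 ports exposed as cvl_0_0/cvl_0_1) and would need adjusting on other hosts. It is a sketch of what the helper does, not the helper itself.

  #!/usr/bin/env bash
  # Reconstruction of the nvmf_tcp_init steps traced above: isolate the
  # target-side NIC in a network namespace, address both ends, open the
  # NVMe/TCP port, and sanity-check connectivity before launching nvmf_tgt.
  set -euo pipefail

  TGT_IF=cvl_0_0            # target-side E810 port, moved into the namespace
  INI_IF=cvl_0_1            # initiator-side E810 port, stays in the root namespace
  NS=cvl_0_0_ns_spdk        # namespace name used by this run
  TGT_IP=10.0.0.2
  INI_IP=10.0.0.1
  PORT=4420

  # Start from clean interfaces, then isolate the target port in its own netns.
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  # Address both ends and bring the links (plus the namespace loopback) up.
  ip addr add "$INI_IP/24" dev "$INI_IF"
  ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Accept NVMe/TCP traffic on the initiator-side port (the test additionally
  # tags the rule with an SPDK_NVMF comment so it can be cleaned up later).
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport "$PORT" -j ACCEPT

  # Verify reachability in both directions, as the two pings above do.
  ping -c 1 "$TGT_IP"
  ip netns exec "$NS" ping -c 1 "$INI_IP"

With the namespace in place, the target application is launched inside it (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why waitforlisten in the following lines waits on that PID and on /var/tmp/spdk.sock.
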
00:26:45.770 [2024-12-13 03:37:46.840296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.770 [2024-12-13 03:37:46.840312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.770 [2024-12-13 03:37:46.840388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.770 [2024-12-13 03:37:46.840394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.338 Malloc0 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.338 Delay0 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.338 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.597 [2024-12-13 03:37:47.551132] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.597 03:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:46.597 [2024-12-13 03:37:47.587456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.597 03:37:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:47.974 03:37:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:47.974 03:37:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:47.975 03:37:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.975 03:37:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:47.975 03:37:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:49.884 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:49.884 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:49.884 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:49.885 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:49.885 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.885 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:49.885 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2761819 00:26:49.885 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:49.885 03:37:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:49.885 [global] 00:26:49.885 thread=1 00:26:49.885 invalidate=1 00:26:49.885 rw=write 00:26:49.885 time_based=1 00:26:49.885 runtime=60 00:26:49.885 ioengine=libaio 00:26:49.885 direct=1 00:26:49.885 bs=4096 00:26:49.885 iodepth=1 00:26:49.885 norandommap=0 00:26:49.885 numjobs=1 00:26:49.885 00:26:49.885 verify_dump=1 00:26:49.885 verify_backlog=512 00:26:49.885 verify_state_save=0 00:26:49.885 do_verify=1 00:26:49.885 verify=crc32c-intel 00:26:49.885 [job0] 00:26:49.885 filename=/dev/nvme0n1 00:26:49.885 Could not set queue depth (nvme0n1) 00:26:50.142 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:50.142 fio-3.35 00:26:50.142 Starting 1 thread 00:26:52.669 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:52.669 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.669 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.669 true 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.670 true 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.670 true 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.670 true 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.670 03:37:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.945 03:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.945 true 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.945 true 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.945 true 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.945 true 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:55.945 03:37:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2761819 00:27:52.157 00:27:52.157 job0: (groupid=0, jobs=1): err= 0: pid=2762087: Fri Dec 13 03:38:51 2024 00:27:52.157 read: IOPS=95, BW=380KiB/s (390kB/s)(22.3MiB/60040msec) 00:27:52.157 slat (nsec): min=6889, max=63482, avg=9014.21, stdev=3953.84 00:27:52.157 clat (usec): min=217, max=42149, avg=2988.13, stdev=10103.94 00:27:52.157 lat (usec): min=224, max=42178, avg=2997.15, stdev=10107.49 00:27:52.157 clat percentiles (usec): 00:27:52.157 | 1.00th=[ 237], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:27:52.157 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:27:52.157 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 461], 95.00th=[41157], 00:27:52.157 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:27:52.157 | 99.99th=[42206] 00:27:52.157 write: IOPS=102, BW=409KiB/s (419kB/s)(24.0MiB/60040msec); 0 zone resets 00:27:52.157 slat (nsec): min=9891, max=45328, avg=11326.78, stdev=2016.55 00:27:52.157 clat (usec): min=158, max=41518k, avg=6969.05, stdev=529669.28 00:27:52.157 lat (usec): min=169, max=41518k, avg=6980.38, stdev=529669.27 00:27:52.157 clat percentiles (usec): 00:27:52.157 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 00:27:52.157 | 20.00th=[ 194], 30.00th=[ 202], 40.00th=[ 204], 00:27:52.157 | 50.00th=[ 208], 60.00th=[ 212], 70.00th=[ 217], 00:27:52.157 | 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 247], 00:27:52.157 | 99.00th=[ 322], 99.50th=[ 330], 
99.90th=[ 347], 00:27:52.157 | 99.95th=[ 355], 99.99th=[17112761] 00:27:52.157 bw ( KiB/s): min= 224, max= 8192, per=100.00%, avg=4915.20, stdev=2936.95, samples=10 00:27:52.157 iops : min= 56, max= 2048, avg=1228.80, stdev=734.24, samples=10 00:27:52.157 lat (usec) : 250=51.00%, 500=45.46%, 750=0.35%, 1000=0.01% 00:27:52.157 lat (msec) : 2=0.01%, 50=3.17%, >=2000=0.01% 00:27:52.157 cpu : usr=0.19%, sys=0.28%, ctx=11856, majf=0, minf=1 00:27:52.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:52.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.157 issued rwts: total=5710,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:52.157 00:27:52.157 Run status group 0 (all jobs): 00:27:52.157 READ: bw=380KiB/s (390kB/s), 380KiB/s-380KiB/s (390kB/s-390kB/s), io=22.3MiB (23.4MB), run=60040-60040msec 00:27:52.157 WRITE: bw=409KiB/s (419kB/s), 409KiB/s-409KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60040-60040msec 00:27:52.157 00:27:52.157 Disk stats (read/write): 00:27:52.157 nvme0n1: ios=5805/6144, merge=0/0, ticks=16886/1237, in_queue=18123, util=99.56% 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:52.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:52.157 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:52.158 nvmf hotplug test: fio successful as expected 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:52.158 03:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.158 rmmod nvme_tcp 00:27:52.158 rmmod nvme_fabrics 00:27:52.158 rmmod nvme_keyring 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 2761090 ']' 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 2761090 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 2761090 ']' 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 2761090 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2761090 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2761090' 00:27:52.158 killing process with pid 2761090 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 2761090 00:27:52.158 03:38:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 2761090 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:52.158 03:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.158 03:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.063 00:27:54.063 real 1m14.410s 00:27:54.063 user 4m29.613s 00:27:54.063 sys 0m6.357s 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:54.063 ************************************ 00:27:54.063 END TEST nvmf_initiator_timeout 00:27:54.063 ************************************ 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.063 03:38:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.628 03:39:00 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.628 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:00.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:00.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:00.629 Found net devices under 0000:af:00.0: cvl_0_0 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:00.629 Found net devices under 0000:af:00.1: cvl_0_1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:00.629 ************************************ 00:28:00.629 START TEST nvmf_perf_adq 00:28:00.629 ************************************ 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:00.629 * Looking for test storage... 
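
Editor's note: the initiator_timeout run that finishes above is driven entirely through SPDK's JSON-RPC interface. A condensed sketch of that RPC sequence follows, using the method names and arguments visible in the trace; invoking scripts/rpc.py directly (and the RPC= path) is an assumption for illustration, since the test goes through its rpc_cmd wrapper, and the fio workload itself (fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v) is omitted here.

  #!/usr/bin/env bash
  # Sketch of the initiator_timeout RPC flow traced above: a malloc bdev wrapped
  # in a delay bdev is exported over NVMe/TCP, then the delay latencies are
  # raised to ~31 s (values are in microseconds) while fio writes through the
  # initiator, and lowered back to 30 us so the remaining I/O can complete.
  set -euo pipefail

  RPC=./scripts/rpc.py      # assumed path to rpc.py in the SPDK checkout under test
  NQN=nqn.2016-06.io.spdk:cnode1
  TGT_IP=10.0.0.2
  PORT=4420

  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a "$TGT_IP" -s "$PORT"

  # (initiator side: nvme connect -t tcp -n "$NQN" -a "$TGT_IP" -s "$PORT",
  #  with the host NQN/ID of the test machine, then fio starts writing)

  # Push latencies up so in-flight commands hit the initiator timeout path.
  $RPC bdev_delay_update_latency Delay0 avg_read  31000000
  $RPC bdev_delay_update_latency Delay0 avg_write 31000000
  $RPC bdev_delay_update_latency Delay0 p99_read  31000000
  $RPC bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3

  # Restore the 30 us baseline so the workload can drain and fio can verify.
  $RPC bdev_delay_update_latency Delay0 avg_read  30
  $RPC bdev_delay_update_latency Delay0 avg_write 30
  $RPC bdev_delay_update_latency Delay0 p99_read  30
  $RPC bdev_delay_update_latency Delay0 p99_write 30

The effect of the latency bump is visible in the fio summary above: roughly 3% of read completions land in the 41-42 s latency bucket and one write percentile reaches the 17112761-ms bin, yet the job still reports "fio successful as expected" once the latencies are restored.
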
00:28:00.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.629 --rc genhtml_branch_coverage=1 00:28:00.629 --rc genhtml_function_coverage=1 00:28:00.629 --rc genhtml_legend=1 00:28:00.629 --rc geninfo_all_blocks=1 00:28:00.629 --rc geninfo_unexecuted_blocks=1 00:28:00.629 00:28:00.629 ' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.629 --rc genhtml_branch_coverage=1 00:28:00.629 --rc genhtml_function_coverage=1 00:28:00.629 --rc genhtml_legend=1 00:28:00.629 --rc geninfo_all_blocks=1 00:28:00.629 --rc geninfo_unexecuted_blocks=1 00:28:00.629 00:28:00.629 ' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.629 --rc genhtml_branch_coverage=1 00:28:00.629 --rc genhtml_function_coverage=1 00:28:00.629 --rc genhtml_legend=1 00:28:00.629 --rc geninfo_all_blocks=1 00:28:00.629 --rc geninfo_unexecuted_blocks=1 00:28:00.629 00:28:00.629 ' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.629 --rc genhtml_branch_coverage=1 00:28:00.629 --rc genhtml_function_coverage=1 00:28:00.629 --rc genhtml_legend=1 00:28:00.629 --rc geninfo_all_blocks=1 00:28:00.629 --rc geninfo_unexecuted_blocks=1 00:28:00.629 00:28:00.629 ' 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
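
Editor's note: the trace immediately above walks through the dotted-version comparison (cmp_versions / lt in scripts/common.sh) that perf_adq.sh uses to decide whether the installed lcov 1.15 needs the extra coverage flags. The following is an independent sketch of that comparison technique under the same idea (split on dots, compare numeric fields left to right); it is not the project's helper and the function name is made up for illustration.

  #!/usr/bin/env bash
  # version_lt A B -> exit 0 when A < B, comparing dot/dash-separated numeric fields.
  version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
          if   (( a < b )); then return 0
          elif (( a > b )); then return 1
          fi
      done
      return 1   # equal is not less-than
  }

  # Mirrors the decision in the trace: lcov 1.15 is older than 2, so the
  # branch/function coverage options get enabled.
  if version_lt 1.15 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
      echo "old lcov detected, using: $LCOV_OPTS"
  fi
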
00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.629 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:00.630 03:39:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.630 03:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:04.858 03:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:04.858 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:04.858 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.858 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:04.859 Found net devices under 0000:af:00.0: cvl_0_0 00:28:04.859 03:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:04.859 Found net devices under 0000:af:00.1: cvl_0_1 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.859 03:39:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:04.859 03:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:04.859 03:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:04.859 03:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:04.859 03:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:06.360 03:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:08.894 03:39:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:14.168 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:14.168 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:14.168 Found net devices under 0000:af:00.0: cvl_0_0 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:14.168 Found net devices under 0000:af:00.1: cvl_0_1 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.168 03:39:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.04 ms 00:28:14.168 00:28:14.168 --- 10.0.0.2 ping statistics --- 00:28:14.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.168 rtt min/avg/max/mdev = 1.038/1.038/1.038/0.000 ms 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:28:14.168 00:28:14.168 --- 10.0.0.1 ping statistics --- 00:28:14.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.168 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.168 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2780353 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2780353 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2780353 ']' 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.169 03:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.169 [2024-12-13 03:39:15.189827] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:28:14.169 [2024-12-13 03:39:15.189915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.169 [2024-12-13 03:39:15.307783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.427 [2024-12-13 03:39:15.417257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.427 [2024-12-13 03:39:15.417299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.427 [2024-12-13 03:39:15.417309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.427 [2024-12-13 03:39:15.417319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.427 [2024-12-13 03:39:15.417327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.427 [2024-12-13 03:39:15.419587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.427 [2024-12-13 03:39:15.419662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.427 [2024-12-13 03:39:15.419722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.427 [2024-12-13 03:39:15.419732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.995 
03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.995 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.254 [2024-12-13 03:39:16.426078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.254 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 Malloc1 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.513 [2024-12-13 03:39:16.544766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2780606 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:15.513 03:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:17.418 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:17.418 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.418 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.418 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.418 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:17.418 "tick_rate": 2100000000, 00:28:17.418 "poll_groups": [ 00:28:17.418 { 00:28:17.418 "name": "nvmf_tgt_poll_group_000", 00:28:17.418 "admin_qpairs": 1, 00:28:17.418 "io_qpairs": 1, 00:28:17.418 "current_admin_qpairs": 1, 00:28:17.418 "current_io_qpairs": 1, 00:28:17.418 "pending_bdev_io": 0, 00:28:17.418 "completed_nvme_io": 18810, 00:28:17.418 "transports": [ 00:28:17.418 { 00:28:17.418 "trtype": "TCP" 00:28:17.418 } 00:28:17.418 ] 00:28:17.418 }, 00:28:17.418 { 00:28:17.418 "name": "nvmf_tgt_poll_group_001", 00:28:17.418 "admin_qpairs": 0, 00:28:17.418 "io_qpairs": 1, 00:28:17.418 "current_admin_qpairs": 0, 00:28:17.418 "current_io_qpairs": 1, 00:28:17.418 "pending_bdev_io": 0, 00:28:17.418 "completed_nvme_io": 18727, 00:28:17.418 "transports": [ 00:28:17.418 { 00:28:17.418 "trtype": "TCP" 00:28:17.418 } 00:28:17.418 ] 00:28:17.418 }, 00:28:17.418 { 00:28:17.419 "name": "nvmf_tgt_poll_group_002", 00:28:17.419 "admin_qpairs": 0, 00:28:17.419 "io_qpairs": 1, 00:28:17.419 "current_admin_qpairs": 0, 00:28:17.419 "current_io_qpairs": 1, 00:28:17.419 "pending_bdev_io": 0, 00:28:17.419 "completed_nvme_io": 19090, 00:28:17.419 "transports": [ 00:28:17.419 { 00:28:17.419 "trtype": "TCP" 00:28:17.419 } 00:28:17.419 ] 00:28:17.419 }, 00:28:17.419 { 00:28:17.419 "name": "nvmf_tgt_poll_group_003", 00:28:17.419 "admin_qpairs": 0, 00:28:17.419 "io_qpairs": 1, 00:28:17.419 "current_admin_qpairs": 0, 00:28:17.419 "current_io_qpairs": 1, 00:28:17.419 "pending_bdev_io": 0, 00:28:17.419 "completed_nvme_io": 18863, 00:28:17.419 "transports": [ 00:28:17.419 { 00:28:17.419 "trtype": "TCP" 00:28:17.419 } 00:28:17.419 ] 00:28:17.419 } 00:28:17.419 ] 00:28:17.419 }' 00:28:17.419 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:17.419 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:17.419 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:17.419 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:17.419 03:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2780606 00:28:27.397 Initializing NVMe Controllers 00:28:27.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:27.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:27.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:27.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:27.397 Initialization complete. Launching workers. 00:28:27.397 ======================================================== 00:28:27.397 Latency(us) 00:28:27.397 Device Information : IOPS MiB/s Average min max 00:28:27.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10253.00 40.05 6242.79 2362.74 10629.24 00:28:27.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10230.10 39.96 6254.78 1865.22 11178.15 00:28:27.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10346.30 40.42 6185.48 2490.11 11293.42 00:28:27.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10262.40 40.09 6235.56 2249.61 10987.35 00:28:27.397 ======================================================== 00:28:27.397 Total : 41091.80 160.51 6229.54 1865.22 11293.42 00:28:27.397 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:27.397 rmmod nvme_tcp 00:28:27.397 rmmod nvme_fabrics 00:28:27.397 rmmod nvme_keyring 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2780353 ']' 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2780353 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2780353 ']' 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2780353 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2780353 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2780353' 00:28:27.397 killing process with pid 2780353 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2780353 00:28:27.397 03:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2780353 00:28:27.397 03:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:27.397 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:27.397 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:27.397 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.398 03:39:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.303 03:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:29.303 03:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:29.303 03:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:29.303 03:39:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:30.680 03:39:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:33.214 03:39:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.486 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.487 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.487 03:39:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:38.487 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:38.487 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:38.487 Found net devices under 0000:af:00.0: cvl_0_0 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:38.487 Found net devices under 0000:af:00.1: cvl_0_1 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.487 03:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.974 ms 00:28:38.487 00:28:38.487 --- 10.0.0.2 ping statistics --- 00:28:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.487 rtt min/avg/max/mdev = 0.974/0.974/0.974/0.000 ms 00:28:38.487 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:28:38.487 00:28:38.487 --- 10.0.0.1 ping statistics --- 00:28:38.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.487 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:38.488 net.core.busy_poll = 1 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:38.488 net.core.busy_read = 1 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2784628 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2784628 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2784628 ']' 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.488 03:39:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.488 [2024-12-13 03:39:39.623467] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:38.488 [2024-12-13 03:39:39.623554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.747 [2024-12-13 03:39:39.741005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.747 [2024-12-13 03:39:39.853912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:38.747 [2024-12-13 03:39:39.853957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.747 [2024-12-13 03:39:39.853968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.747 [2024-12-13 03:39:39.853979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.747 [2024-12-13 03:39:39.853987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.747 [2024-12-13 03:39:39.856335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.747 [2024-12-13 03:39:39.856351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.747 [2024-12-13 03:39:39.856449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.747 [2024-12-13 03:39:39.856457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.315 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.574 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.574 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:39.574 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.574 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.833 03:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.833 [2024-12-13 03:39:40.873976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.833 Malloc1 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:39.833 [2024-12-13 03:39:40.994283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2784880 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:39.833 03:39:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.371 03:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:42.371 "tick_rate": 2100000000, 00:28:42.371 "poll_groups": [ 00:28:42.371 { 00:28:42.371 "name": "nvmf_tgt_poll_group_000", 00:28:42.371 "admin_qpairs": 1, 00:28:42.371 "io_qpairs": 1, 00:28:42.371 "current_admin_qpairs": 1, 00:28:42.371 "current_io_qpairs": 1, 00:28:42.371 "pending_bdev_io": 0, 00:28:42.371 "completed_nvme_io": 25611, 00:28:42.371 "transports": [ 00:28:42.371 { 00:28:42.371 "trtype": "TCP" 00:28:42.371 } 00:28:42.371 ] 00:28:42.371 }, 00:28:42.371 { 00:28:42.371 "name": "nvmf_tgt_poll_group_001", 00:28:42.371 "admin_qpairs": 0, 00:28:42.371 "io_qpairs": 3, 00:28:42.371 "current_admin_qpairs": 0, 00:28:42.371 "current_io_qpairs": 3, 00:28:42.371 "pending_bdev_io": 0, 00:28:42.371 "completed_nvme_io": 25843, 00:28:42.371 "transports": [ 00:28:42.371 { 00:28:42.371 "trtype": "TCP" 00:28:42.371 } 00:28:42.371 ] 00:28:42.371 }, 00:28:42.371 { 00:28:42.371 "name": "nvmf_tgt_poll_group_002", 00:28:42.371 "admin_qpairs": 0, 00:28:42.371 "io_qpairs": 0, 00:28:42.371 "current_admin_qpairs": 0, 00:28:42.371 "current_io_qpairs": 0, 00:28:42.371 "pending_bdev_io": 0, 00:28:42.371 "completed_nvme_io": 0, 00:28:42.371 "transports": [ 00:28:42.371 { 00:28:42.371 "trtype": "TCP" 00:28:42.371 } 00:28:42.371 ] 00:28:42.371 }, 00:28:42.371 { 00:28:42.371 "name": "nvmf_tgt_poll_group_003", 00:28:42.371 "admin_qpairs": 0, 00:28:42.371 "io_qpairs": 0, 00:28:42.371 "current_admin_qpairs": 0, 00:28:42.371 "current_io_qpairs": 0, 00:28:42.371 "pending_bdev_io": 0, 00:28:42.371 "completed_nvme_io": 0, 00:28:42.371 "transports": [ 00:28:42.371 { 00:28:42.371 "trtype": "TCP" 00:28:42.371 } 00:28:42.371 ] 00:28:42.371 } 00:28:42.371 ] 00:28:42.371 }' 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:42.371 03:39:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2784880 00:28:50.493 Initializing NVMe Controllers 00:28:50.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:50.493 Initialization complete. Launching workers. 
00:28:50.493 ======================================================== 00:28:50.493 Latency(us) 00:28:50.493 Device Information : IOPS MiB/s Average min max 00:28:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14084.80 55.02 4543.57 1719.06 6724.37 00:28:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4697.20 18.35 13628.45 1775.09 61151.57 00:28:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4687.80 18.31 13686.01 2011.99 60751.14 00:28:50.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4645.30 18.15 13781.25 1922.66 62358.70 00:28:50.493 ======================================================== 00:28:50.493 Total : 28115.09 109.82 9112.05 1719.06 62358.70 00:28:50.493 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.493 rmmod nvme_tcp 00:28:50.493 rmmod nvme_fabrics 00:28:50.493 rmmod nvme_keyring 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.493 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2784628 ']' 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2784628 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2784628 ']' 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2784628 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2784628 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2784628' 00:28:50.494 killing process with pid 2784628 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2784628 00:28:50.494 03:39:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2784628 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.873 
03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.873 03:39:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:53.784 00:28:53.784 real 0m54.158s 00:28:53.784 user 2m58.618s 00:28:53.784 sys 0m10.250s 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.784 ************************************ 00:28:53.784 END TEST nvmf_perf_adq 00:28:53.784 ************************************ 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:53.784 ************************************ 00:28:53.784 START TEST nvmf_shutdown 00:28:53.784 ************************************ 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:53.784 * Looking for test storage... 
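The adq_configure_driver phase traced in the nvmf_perf_adq run above reduces to a short command sequence: enable hardware TC offload on the E810 port, turn on socket busy polling, split the queues into two traffic classes with mqprio, and steer NVMe/TCP traffic bound for 10.0.0.2:4420 into the second class with a hardware flower filter. A condensed sketch follows, with the interface name, queue layout and filter taken from the trace above; treat it as illustrative rather than the verbatim perf_adq.sh body.

NS="ip netns exec cvl_0_0_ns_spdk"              # the target-side port lives in this namespace
$NS ethtool --offload cvl_0_0 hw-tc-offload on   # allow tc offload on the ice/E810 port
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                   # busy-poll sockets instead of sleeping
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 on queues 0-1, TC1 on queues 2-3, offloaded in channel mode
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP connections to 10.0.0.2:4420 into TC1 entirely in hardware
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper invoked right after these commands pins transmit queues to the matching receive queues, and the perf run then verifies the steering by counting, via nvmf_get_stats, how many poll groups are left without I/O qpairs.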
00:28:53.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.784 03:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:54.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.044 --rc genhtml_branch_coverage=1 00:28:54.044 --rc genhtml_function_coverage=1 00:28:54.044 --rc genhtml_legend=1 00:28:54.044 --rc geninfo_all_blocks=1 00:28:54.044 --rc geninfo_unexecuted_blocks=1 00:28:54.044 00:28:54.044 ' 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:54.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.044 --rc genhtml_branch_coverage=1 00:28:54.044 --rc genhtml_function_coverage=1 00:28:54.044 --rc genhtml_legend=1 00:28:54.044 --rc geninfo_all_blocks=1 00:28:54.044 --rc geninfo_unexecuted_blocks=1 00:28:54.044 00:28:54.044 ' 00:28:54.044 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:54.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.044 --rc genhtml_branch_coverage=1 00:28:54.044 --rc genhtml_function_coverage=1 00:28:54.044 --rc genhtml_legend=1 00:28:54.044 --rc geninfo_all_blocks=1 00:28:54.045 --rc geninfo_unexecuted_blocks=1 00:28:54.045 00:28:54.045 ' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:54.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.045 --rc genhtml_branch_coverage=1 00:28:54.045 --rc genhtml_function_coverage=1 00:28:54.045 --rc genhtml_legend=1 00:28:54.045 --rc geninfo_all_blocks=1 00:28:54.045 --rc geninfo_unexecuted_blocks=1 00:28:54.045 00:28:54.045 ' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
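The lt 1.15 2 walk-through above is scripts/common.sh comparing the detected lcov version against 2 before choosing coverage flags: both version strings are split on dots into arrays and compared field by field. A minimal sketch of that comparison, assuming only the numeric dotted fields exercised here (the cmp_versions machinery traced above has more cases):

lt() {
    local IFS=.-:                      # split versions on dots, dashes and colons
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
        ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
    done
    return 1                           # equal versions are not "less than"
}
lt 1.15 2 && echo "installed lcov predates 2.x"    # the branch taken in the trace above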
00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:54.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:54.045 03:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:54.045 ************************************ 00:28:54.045 START TEST nvmf_shutdown_tc1 00:28:54.045 ************************************ 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.045 03:39:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.321 03:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.321 03:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:59.321 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:59.321 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:59.321 Found net devices under 0000:af:00.0: cvl_0_0 00:28:59.321 03:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:59.321 Found net devices under 0000:af:00.1: cvl_0_1 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.321 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:28:59.322 00:28:59.322 --- 10.0.0.2 ping statistics --- 00:28:59.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.322 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:28:59.322 00:28:59.322 --- 10.0.0.1 ping statistics --- 00:28:59.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.322 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.322 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2790216 00:28:59.581 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2790216 00:28:59.581 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:59.581 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2790216 ']' 00:28:59.581 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.581 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.581 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
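The nvmfappstart/waitforlisten block above starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then polls for the RPC socket at /var/tmp/spdk.sock for up to max_retries attempts. A minimal sketch of that launch-and-wait pattern, assuming a readiness probe via scripts/rpc.py rpc_get_methods and paths relative to an SPDK checkout; the helper in autotest_common.sh may differ in detail:

# start the target in the namespace that owns cvl_0_0, with the same core mask as above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do                      # max_retries=100 as in the trace
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                                        # RPC socket answers, target is ready
    fi
    kill -0 "$nvmfpid" || exit 1                     # give up if the target died during startup
    sleep 0.5
done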
00:28:59.582 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.582 03:40:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.582 [2024-12-13 03:40:00.607060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:59.582 [2024-12-13 03:40:00.607147] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.582 [2024-12-13 03:40:00.724002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.841 [2024-12-13 03:40:00.829807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.841 [2024-12-13 03:40:00.829848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.841 [2024-12-13 03:40:00.829858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.841 [2024-12-13 03:40:00.829871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.841 [2024-12-13 03:40:00.829879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.841 [2024-12-13 03:40:00.832257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.841 [2024-12-13 03:40:00.832328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.841 [2024-12-13 03:40:00.832413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.841 [2024-12-13 03:40:00.832437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.410 [2024-12-13 03:40:01.472266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:00.410 03:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.410 03:40:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.670 Malloc1 
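
The loop above appends one block of RPC calls per subsystem to rpcs.txt, and the single rpc_cmd invocation then replays the whole file; the Malloc1 through Malloc10 names printed here are the malloc bdevs those calls create. xtrace does not show the heredoc contents, so the batch below is only an illustration of the usual shape for one subsystem (sizes and serial number are made up; the RPC names are standard SPDK RPCs, and the 10.0.0.2:4420 listener matches the listen notice that follows):

    # Illustrative per-subsystem batch -- not copied from shutdown.sh.
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
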
00:29:00.670 [2024-12-13 03:40:01.644805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.670 Malloc2 00:29:00.670 Malloc3 00:29:00.929 Malloc4 00:29:00.929 Malloc5 00:29:00.929 Malloc6 00:29:01.188 Malloc7 00:29:01.188 Malloc8 00:29:01.449 Malloc9 00:29:01.449 Malloc10 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2790499 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2790499 /var/tmp/bdevperf.sock 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2790499 ']' 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
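
The --json argument handed to bdev_svc (and later to bdevperf) is produced by gen_nvmf_target_json, whose trace follows: it builds one bdev_nvme_attach_controller fragment per subsystem in a config array, joins the fragments with commas (the IFS=, / printf step), and passes the result through process substitution, which is why the command lines show /dev/fd/63 and /dev/fd/62 instead of a file on disk. A stripped-down sketch of that fragment-and-join pattern (simplified fields; the real helper lives in nvmf/common.sh and emits the full params/method objects shown below):

    # Simplified version of the fragment-and-join pattern traced below.
    config=()
    for i in 1 2 3; do
        config+=("$(printf '{"name":"Nvme%d","subnqn":"nqn.2016-06.io.spdk:cnode%d"}' "$i" "$i")")
    done
    # Join with commas inside a subshell so IFS is not changed for the caller.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .
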
00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.449 { 00:29:01.449 "params": { 00:29:01.449 "name": "Nvme$subsystem", 00:29:01.449 "trtype": "$TEST_TRANSPORT", 00:29:01.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.449 "adrfam": "ipv4", 00:29:01.449 "trsvcid": "$NVMF_PORT", 00:29:01.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.449 "hdgst": ${hdgst:-false}, 00:29:01.449 "ddgst": ${ddgst:-false} 00:29:01.449 }, 00:29:01.449 "method": "bdev_nvme_attach_controller" 00:29:01.449 } 00:29:01.449 EOF 00:29:01.449 )") 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.449 { 00:29:01.449 "params": { 00:29:01.449 "name": "Nvme$subsystem", 00:29:01.449 "trtype": "$TEST_TRANSPORT", 00:29:01.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.449 "adrfam": "ipv4", 00:29:01.449 "trsvcid": "$NVMF_PORT", 00:29:01.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.449 "hdgst": ${hdgst:-false}, 00:29:01.449 "ddgst": ${ddgst:-false} 00:29:01.449 }, 00:29:01.449 "method": "bdev_nvme_attach_controller" 00:29:01.449 } 00:29:01.449 EOF 00:29:01.449 )") 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.449 { 00:29:01.449 "params": { 00:29:01.449 "name": "Nvme$subsystem", 00:29:01.449 "trtype": "$TEST_TRANSPORT", 00:29:01.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.449 "adrfam": "ipv4", 00:29:01.449 "trsvcid": "$NVMF_PORT", 00:29:01.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.449 "hdgst": ${hdgst:-false}, 00:29:01.449 "ddgst": ${ddgst:-false} 00:29:01.449 }, 00:29:01.449 "method": "bdev_nvme_attach_controller" 
00:29:01.449 } 00:29:01.449 EOF 00:29:01.449 )") 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.449 { 00:29:01.449 "params": { 00:29:01.449 "name": "Nvme$subsystem", 00:29:01.449 "trtype": "$TEST_TRANSPORT", 00:29:01.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.449 "adrfam": "ipv4", 00:29:01.449 "trsvcid": "$NVMF_PORT", 00:29:01.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.449 "hdgst": ${hdgst:-false}, 00:29:01.449 "ddgst": ${ddgst:-false} 00:29:01.449 }, 00:29:01.449 "method": "bdev_nvme_attach_controller" 00:29:01.449 } 00:29:01.449 EOF 00:29:01.449 )") 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.449 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.449 { 00:29:01.449 "params": { 00:29:01.449 "name": "Nvme$subsystem", 00:29:01.449 "trtype": "$TEST_TRANSPORT", 00:29:01.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.449 "adrfam": "ipv4", 00:29:01.449 "trsvcid": "$NVMF_PORT", 00:29:01.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.449 "hdgst": ${hdgst:-false}, 00:29:01.449 "ddgst": ${ddgst:-false} 00:29:01.449 }, 00:29:01.449 "method": "bdev_nvme_attach_controller" 00:29:01.449 } 00:29:01.449 EOF 00:29:01.449 )") 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.450 { 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme$subsystem", 00:29:01.450 "trtype": "$TEST_TRANSPORT", 00:29:01.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "$NVMF_PORT", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.450 "hdgst": ${hdgst:-false}, 00:29:01.450 "ddgst": ${ddgst:-false} 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 } 00:29:01.450 EOF 00:29:01.450 )") 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.450 { 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme$subsystem", 00:29:01.450 "trtype": "$TEST_TRANSPORT", 00:29:01.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "$NVMF_PORT", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.450 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.450 "hdgst": ${hdgst:-false}, 00:29:01.450 "ddgst": ${ddgst:-false} 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 } 00:29:01.450 EOF 00:29:01.450 )") 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.450 { 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme$subsystem", 00:29:01.450 "trtype": "$TEST_TRANSPORT", 00:29:01.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "$NVMF_PORT", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.450 "hdgst": ${hdgst:-false}, 00:29:01.450 "ddgst": ${ddgst:-false} 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 } 00:29:01.450 EOF 00:29:01.450 )") 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.450 { 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme$subsystem", 00:29:01.450 "trtype": "$TEST_TRANSPORT", 00:29:01.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "$NVMF_PORT", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.450 "hdgst": ${hdgst:-false}, 00:29:01.450 "ddgst": ${ddgst:-false} 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 } 00:29:01.450 EOF 00:29:01.450 )") 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.450 { 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme$subsystem", 00:29:01.450 "trtype": "$TEST_TRANSPORT", 00:29:01.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "$NVMF_PORT", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.450 "hdgst": ${hdgst:-false}, 00:29:01.450 "ddgst": ${ddgst:-false} 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 } 00:29:01.450 EOF 00:29:01.450 )") 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:01.450 03:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme1", 00:29:01.450 "trtype": "tcp", 00:29:01.450 "traddr": "10.0.0.2", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "4420", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.450 "hdgst": false, 00:29:01.450 "ddgst": false 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 },{ 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme2", 00:29:01.450 "trtype": "tcp", 00:29:01.450 "traddr": "10.0.0.2", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "4420", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:01.450 "hdgst": false, 00:29:01.450 "ddgst": false 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 },{ 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme3", 00:29:01.450 "trtype": "tcp", 00:29:01.450 "traddr": "10.0.0.2", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "4420", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:01.450 "hdgst": false, 00:29:01.450 "ddgst": false 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 },{ 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme4", 00:29:01.450 "trtype": "tcp", 00:29:01.450 "traddr": "10.0.0.2", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "4420", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:01.450 "hdgst": false, 00:29:01.450 "ddgst": false 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 },{ 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme5", 00:29:01.450 "trtype": "tcp", 00:29:01.450 "traddr": "10.0.0.2", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "4420", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:01.450 "hdgst": false, 00:29:01.450 "ddgst": false 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 },{ 00:29:01.450 "params": { 00:29:01.450 "name": "Nvme6", 00:29:01.450 "trtype": "tcp", 00:29:01.450 "traddr": "10.0.0.2", 00:29:01.450 "adrfam": "ipv4", 00:29:01.450 "trsvcid": "4420", 00:29:01.450 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:01.450 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:01.450 "hdgst": false, 00:29:01.450 "ddgst": false 00:29:01.450 }, 00:29:01.450 "method": "bdev_nvme_attach_controller" 00:29:01.450 },{ 00:29:01.450 "params": { 00:29:01.451 "name": "Nvme7", 00:29:01.451 "trtype": "tcp", 00:29:01.451 "traddr": "10.0.0.2", 00:29:01.451 "adrfam": "ipv4", 00:29:01.451 "trsvcid": "4420", 00:29:01.451 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:01.451 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:01.451 "hdgst": false, 00:29:01.451 "ddgst": false 00:29:01.451 }, 00:29:01.451 "method": "bdev_nvme_attach_controller" 00:29:01.451 },{ 00:29:01.451 "params": { 00:29:01.451 "name": "Nvme8", 00:29:01.451 "trtype": "tcp", 00:29:01.451 "traddr": "10.0.0.2", 00:29:01.451 "adrfam": "ipv4", 00:29:01.451 "trsvcid": "4420", 00:29:01.451 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:01.451 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:01.451 "hdgst": false, 00:29:01.451 "ddgst": false 00:29:01.451 }, 00:29:01.451 "method": "bdev_nvme_attach_controller" 00:29:01.451 },{ 00:29:01.451 "params": { 00:29:01.451 "name": "Nvme9", 00:29:01.451 "trtype": "tcp", 00:29:01.451 "traddr": "10.0.0.2", 00:29:01.451 "adrfam": "ipv4", 00:29:01.451 "trsvcid": "4420", 00:29:01.451 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:01.451 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:01.451 "hdgst": false, 00:29:01.451 "ddgst": false 00:29:01.451 }, 00:29:01.451 "method": "bdev_nvme_attach_controller" 00:29:01.451 },{ 00:29:01.451 "params": { 00:29:01.451 "name": "Nvme10", 00:29:01.451 "trtype": "tcp", 00:29:01.451 "traddr": "10.0.0.2", 00:29:01.451 "adrfam": "ipv4", 00:29:01.451 "trsvcid": "4420", 00:29:01.451 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:01.451 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:01.451 "hdgst": false, 00:29:01.451 "ddgst": false 00:29:01.451 }, 00:29:01.451 "method": "bdev_nvme_attach_controller" 00:29:01.451 }' 00:29:01.451 [2024-12-13 03:40:02.642013] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:01.451 [2024-12-13 03:40:02.642121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:01.711 [2024-12-13 03:40:02.760742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.711 [2024-12-13 03:40:02.873176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2790499 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:03.619 03:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:04.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2790499 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2790216 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 
"trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 
"params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.557 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.557 { 00:29:04.557 "params": { 00:29:04.557 "name": "Nvme$subsystem", 00:29:04.557 "trtype": "$TEST_TRANSPORT", 00:29:04.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.557 "adrfam": "ipv4", 00:29:04.557 "trsvcid": "$NVMF_PORT", 00:29:04.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.557 "hdgst": ${hdgst:-false}, 00:29:04.557 "ddgst": ${ddgst:-false} 00:29:04.557 }, 00:29:04.557 "method": "bdev_nvme_attach_controller" 00:29:04.557 } 00:29:04.557 EOF 00:29:04.557 )") 00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:04.558 { 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme$subsystem", 00:29:04.558 "trtype": "$TEST_TRANSPORT", 00:29:04.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "$NVMF_PORT", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.558 "hdgst": ${hdgst:-false}, 00:29:04.558 "ddgst": ${ddgst:-false} 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 } 00:29:04.558 EOF 00:29:04.558 )") 00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:04.558 03:40:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme1", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme2", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme3", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme4", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme5", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme6", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme7", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme8", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme9", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 },{ 00:29:04.558 "params": { 00:29:04.558 "name": "Nvme10", 00:29:04.558 "trtype": "tcp", 00:29:04.558 "traddr": "10.0.0.2", 00:29:04.558 "adrfam": "ipv4", 00:29:04.558 "trsvcid": "4420", 00:29:04.558 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.558 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.558 "hdgst": false, 00:29:04.558 "ddgst": false 00:29:04.558 }, 00:29:04.558 "method": "bdev_nvme_attach_controller" 00:29:04.558 }' 00:29:04.558 [2024-12-13 03:40:05.528854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:04.558 [2024-12-13 03:40:05.528951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790994 ] 00:29:04.558 [2024-12-13 03:40:05.647962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.817 [2024-12-13 03:40:05.764855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.724 Running I/O for 1 seconds... 00:29:07.552 1929.00 IOPS, 120.56 MiB/s 00:29:07.552 Latency(us) 00:29:07.552 [2024-12-13T02:40:08.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.552 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.552 Verification LBA range: start 0x0 length 0x400 00:29:07.552 Nvme1n1 : 1.14 223.80 13.99 0.00 0.00 282698.85 19972.88 239674.51 00:29:07.552 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.552 Verification LBA range: start 0x0 length 0x400 00:29:07.552 Nvme2n1 : 1.11 230.59 14.41 0.00 0.00 270713.90 18100.42 246665.02 00:29:07.553 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme3n1 : 1.18 271.23 16.95 0.00 0.00 224968.51 16103.13 243669.09 00:29:07.553 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme4n1 : 1.19 269.62 16.85 0.00 0.00 225086.32 16227.96 254654.17 00:29:07.553 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme5n1 : 1.10 241.20 15.08 0.00 0.00 241192.25 7084.13 238675.87 00:29:07.553 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme6n1 : 1.12 228.68 14.29 0.00 0.00 256197.73 20846.69 241671.80 00:29:07.553 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme7n1 : 1.19 268.35 16.77 0.00 0.00 216087.89 14293.09 250659.60 00:29:07.553 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 
Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme8n1 : 1.20 266.76 16.67 0.00 0.00 214255.32 16727.28 239674.51 00:29:07.553 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme9n1 : 1.17 218.08 13.63 0.00 0.00 257312.67 20597.03 248662.31 00:29:07.553 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.553 Verification LBA range: start 0x0 length 0x400 00:29:07.553 Nvme10n1 : 1.20 265.86 16.62 0.00 0.00 208494.30 14854.83 263641.97 00:29:07.553 [2024-12-13T02:40:08.762Z] =================================================================================================================== 00:29:07.553 [2024-12-13T02:40:08.762Z] Total : 2484.16 155.26 0.00 0.00 237278.55 7084.13 263641.97 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.491 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.491 rmmod nvme_tcp 00:29:08.750 rmmod nvme_fabrics 00:29:08.750 rmmod nvme_keyring 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2790216 ']' 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2790216 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2790216 ']' 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2790216 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2790216 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2790216' 00:29:08.750 killing process with pid 2790216 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2790216 00:29:08.750 03:40:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2790216 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.144 03:40:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.070 00:29:14.070 real 0m19.803s 00:29:14.070 user 0m53.738s 00:29:14.070 sys 0m5.811s 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.070 ************************************ 00:29:14.070 END TEST nvmf_shutdown_tc1 00:29:14.070 ************************************ 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
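
A quick sanity check on the bdevperf summary above: the MiB/s column is just IOPS multiplied by the 64 KiB I/O size passed on the command line (-o 65536).

    # Throughput = IOPS * I/O size, converted to MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 2484.16 * 65536 / (1024 * 1024) }'
    # prints 155.26 MiB/s, matching the Total row.
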
00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.070 ************************************ 00:29:14.070 START TEST nvmf_shutdown_tc2 00:29:14.070 ************************************ 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:14.070 03:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.070 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:14.071 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:14.071 03:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:14.071 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:14.071 Found net devices under 0000:af:00.0: cvl_0_0 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:14.071 Found net devices under 0000:af:00.1: cvl_0_1 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.071 03:40:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.071 03:40:15 
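The discovery pass above walks the PCI device cache, matches both ports at 0000:af:00.0 and 0000:af:00.1 against the Intel E810 device ID 0x159b (ice driver), and records the net devices they expose as cvl_0_0 and cvl_0_1. A minimal way to confirm the same hardware outside the harness, using the vendor/device ID and PCI addresses taken from the trace, would be:

    # E810 ports the harness matched (vendor 0x8086, device 0x159b)
    lspci -nn -d 8086:159b
    # the net device behind each port, same sysfs path the script globs
    ls /sys/bus/pci/devices/0000:af:00.0/net /sys/bus/pci/devices/0000:af:00.1/net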
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.071 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:29:14.329 00:29:14.329 --- 10.0.0.2 ping statistics --- 00:29:14.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.329 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:14.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:14.329 00:29:14.329 --- 10.0.0.1 ping statistics --- 00:29:14.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.329 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2792652 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2792652 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2792652 ']' 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
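Both pings succeeding is the sign that nvmf_tcp_init built the intended topology: the target port cvl_0_0 sits inside the network namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 stays in the default namespace with 10.0.0.1/24, and an iptables rule (tagged with an SPDK_NVMF comment so it can be stripped again at teardown) admits TCP traffic to port 4420. Condensed from the traced commands, the equivalent manual setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator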
00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.329 03:40:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:14.329 [2024-12-13 03:40:15.396318] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:14.329 [2024-12-13 03:40:15.396409] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.329 [2024-12-13 03:40:15.509478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.588 [2024-12-13 03:40:15.621631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.588 [2024-12-13 03:40:15.621678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.588 [2024-12-13 03:40:15.621690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.588 [2024-12-13 03:40:15.621701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.588 [2024-12-13 03:40:15.621710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.588 [2024-12-13 03:40:15.624281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.588 [2024-12-13 03:40:15.624354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.588 [2024-12-13 03:40:15.624432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.588 [2024-12-13 03:40:15.624454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.163 [2024-12-13 03:40:16.246255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:15.163 03:40:16 
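The target is started inside the namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0x1E; the mask 0x1E is binary 11110, i.e. cores 1-4, which is exactly the four reactor notices above (core 0 is deliberately left free for the bdevperf initiator launched later with -c 0x1). Once the app is listening on /var/tmp/spdk.sock, the TCP transport is created over JSON-RPC. A standalone equivalent of that rpc_cmd call, assuming an SPDK checkout with scripts/rpc.py on hand, would look like:

    # same transport arguments as the traced 'rpc_cmd nvmf_create_transport -t tcp -o -u 8192'
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192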
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.163 03:40:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.422 Malloc1 
00:29:15.422 [2024-12-13 03:40:16.416174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.422 Malloc2 00:29:15.422 Malloc3 00:29:15.681 Malloc4 00:29:15.681 Malloc5 00:29:15.940 Malloc6 00:29:15.940 Malloc7 00:29:15.940 Malloc8 00:29:16.200 Malloc9 00:29:16.200 Malloc10 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2793125 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2793125 /var/tmp/bdevperf.sock 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2793125 ']' 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
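The create_subsystems loop only shows ten cat calls and the resulting Malloc1..Malloc10 bdevs, so the generated rpcs.txt itself is not visible in the trace. Judging from the Malloc bdev names, the nqn.2016-06.io.spdk:cnodeN subsystem NQNs used by bdevperf below, and the listener notice on 10.0.0.2:4420, each block presumably boils down to the standard SPDK RPC sequence sketched here; the RPC names are the standard ones, but the malloc size and serial number are illustrative assumptions, not taken from the log:

    # hypothetical per-subsystem batch for index $i (size/block size/serial are assumptions)
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420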
00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.200 { 00:29:16.200 "params": { 00:29:16.200 "name": "Nvme$subsystem", 00:29:16.200 "trtype": "$TEST_TRANSPORT", 00:29:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.200 "adrfam": "ipv4", 00:29:16.200 "trsvcid": "$NVMF_PORT", 00:29:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.200 "hdgst": ${hdgst:-false}, 00:29:16.200 "ddgst": ${ddgst:-false} 00:29:16.200 }, 00:29:16.200 "method": "bdev_nvme_attach_controller" 00:29:16.200 } 00:29:16.200 EOF 00:29:16.200 )") 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.200 { 00:29:16.200 "params": { 00:29:16.200 "name": "Nvme$subsystem", 00:29:16.200 "trtype": "$TEST_TRANSPORT", 00:29:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.200 "adrfam": "ipv4", 00:29:16.200 "trsvcid": "$NVMF_PORT", 00:29:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.200 "hdgst": ${hdgst:-false}, 00:29:16.200 "ddgst": ${ddgst:-false} 00:29:16.200 }, 00:29:16.200 "method": "bdev_nvme_attach_controller" 00:29:16.200 } 00:29:16.200 EOF 00:29:16.200 )") 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.200 { 00:29:16.200 "params": { 00:29:16.200 "name": "Nvme$subsystem", 00:29:16.200 "trtype": "$TEST_TRANSPORT", 00:29:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.200 "adrfam": "ipv4", 00:29:16.200 "trsvcid": "$NVMF_PORT", 00:29:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.200 "hdgst": ${hdgst:-false}, 00:29:16.200 "ddgst": ${ddgst:-false} 00:29:16.200 }, 00:29:16.200 "method": "bdev_nvme_attach_controller" 00:29:16.200 } 00:29:16.200 EOF 00:29:16.200 )") 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:16.200 { 00:29:16.200 "params": { 00:29:16.200 "name": "Nvme$subsystem", 00:29:16.200 "trtype": "$TEST_TRANSPORT", 00:29:16.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.200 "adrfam": "ipv4", 00:29:16.200 "trsvcid": "$NVMF_PORT", 00:29:16.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.200 "hdgst": ${hdgst:-false}, 00:29:16.200 "ddgst": ${ddgst:-false} 00:29:16.200 }, 00:29:16.200 "method": "bdev_nvme_attach_controller" 00:29:16.200 } 00:29:16.200 EOF 00:29:16.200 )") 00:29:16.200 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.460 { 00:29:16.460 "params": { 00:29:16.460 "name": "Nvme$subsystem", 00:29:16.460 "trtype": "$TEST_TRANSPORT", 00:29:16.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.460 "adrfam": "ipv4", 00:29:16.460 "trsvcid": "$NVMF_PORT", 00:29:16.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.460 "hdgst": ${hdgst:-false}, 00:29:16.460 "ddgst": ${ddgst:-false} 00:29:16.460 }, 00:29:16.460 "method": "bdev_nvme_attach_controller" 00:29:16.460 } 00:29:16.460 EOF 00:29:16.460 )") 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.460 { 00:29:16.460 "params": { 00:29:16.460 "name": "Nvme$subsystem", 00:29:16.460 "trtype": "$TEST_TRANSPORT", 00:29:16.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.460 "adrfam": "ipv4", 00:29:16.460 "trsvcid": "$NVMF_PORT", 00:29:16.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.460 "hdgst": ${hdgst:-false}, 00:29:16.460 "ddgst": ${ddgst:-false} 00:29:16.460 }, 00:29:16.460 "method": "bdev_nvme_attach_controller" 00:29:16.460 } 00:29:16.460 EOF 00:29:16.460 )") 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.460 { 00:29:16.460 "params": { 00:29:16.460 "name": "Nvme$subsystem", 00:29:16.460 "trtype": "$TEST_TRANSPORT", 00:29:16.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.460 "adrfam": "ipv4", 00:29:16.460 "trsvcid": "$NVMF_PORT", 00:29:16.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.460 "hdgst": ${hdgst:-false}, 00:29:16.460 "ddgst": ${ddgst:-false} 00:29:16.460 }, 00:29:16.460 "method": "bdev_nvme_attach_controller" 00:29:16.460 } 00:29:16.460 EOF 00:29:16.460 )") 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.460 { 00:29:16.460 "params": { 00:29:16.460 "name": "Nvme$subsystem", 00:29:16.460 "trtype": "$TEST_TRANSPORT", 00:29:16.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.460 "adrfam": "ipv4", 00:29:16.460 "trsvcid": "$NVMF_PORT", 00:29:16.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.460 "hdgst": ${hdgst:-false}, 00:29:16.460 "ddgst": ${ddgst:-false} 00:29:16.460 }, 00:29:16.460 "method": "bdev_nvme_attach_controller" 00:29:16.460 } 00:29:16.460 EOF 00:29:16.460 )") 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.460 { 00:29:16.460 "params": { 00:29:16.460 "name": "Nvme$subsystem", 00:29:16.460 "trtype": "$TEST_TRANSPORT", 00:29:16.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.460 "adrfam": "ipv4", 00:29:16.460 "trsvcid": "$NVMF_PORT", 00:29:16.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.460 "hdgst": ${hdgst:-false}, 00:29:16.460 "ddgst": ${ddgst:-false} 00:29:16.460 }, 00:29:16.460 "method": "bdev_nvme_attach_controller" 00:29:16.460 } 00:29:16.460 EOF 00:29:16.460 )") 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:16.460 { 00:29:16.460 "params": { 00:29:16.460 "name": "Nvme$subsystem", 00:29:16.460 "trtype": "$TEST_TRANSPORT", 00:29:16.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.460 "adrfam": "ipv4", 00:29:16.460 "trsvcid": "$NVMF_PORT", 00:29:16.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.460 "hdgst": ${hdgst:-false}, 00:29:16.460 "ddgst": ${ddgst:-false} 00:29:16.460 }, 00:29:16.460 "method": "bdev_nvme_attach_controller" 00:29:16.460 } 00:29:16.460 EOF 00:29:16.460 )") 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:16.460 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:29:16.461 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:16.461 03:40:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme1", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme2", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme3", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme4", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme5", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme6", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme7", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme8", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme9", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 },{ 00:29:16.461 "params": { 00:29:16.461 "name": "Nvme10", 00:29:16.461 "trtype": "tcp", 00:29:16.461 "traddr": "10.0.0.2", 00:29:16.461 "adrfam": "ipv4", 00:29:16.461 "trsvcid": "4420", 00:29:16.461 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:16.461 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:16.461 "hdgst": false, 00:29:16.461 "ddgst": false 00:29:16.461 }, 00:29:16.461 "method": "bdev_nvme_attach_controller" 00:29:16.461 }' 00:29:16.461 [2024-12-13 03:40:17.455424] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:16.461 [2024-12-13 03:40:17.455510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793125 ] 00:29:16.461 [2024-12-13 03:40:17.569591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.720 [2024-12-13 03:40:17.682913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.624 Running I/O for 10 seconds... 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.883 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.142 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.142 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:19.142 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:19.142 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2793125 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2793125 ']' 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2793125 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793125 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.401 03:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793125' 00:29:19.401 killing process with pid 2793125 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2793125 00:29:19.401 03:40:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2793125 00:29:19.401 2205.00 IOPS, 137.81 MiB/s [2024-12-13T02:40:20.610Z] Received shutdown signal, test time was about 1.029572 seconds 00:29:19.401 00:29:19.401 Latency(us) 00:29:19.401 [2024-12-13T02:40:20.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.401 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme1n1 : 1.00 255.10 15.94 0.00 0.00 248317.32 18474.91 242670.45 00:29:19.401 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme2n1 : 1.01 254.57 15.91 0.00 0.00 243387.98 19223.89 246665.02 00:29:19.401 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme3n1 : 0.98 265.44 16.59 0.00 0.00 229247.69 1880.26 238675.87 00:29:19.401 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme4n1 : 0.99 261.93 16.37 0.00 0.00 228915.15 3916.56 247663.66 00:29:19.401 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme5n1 : 1.03 248.81 15.55 0.00 0.00 237858.62 33704.23 245666.38 00:29:19.401 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme6n1 : 0.99 272.64 17.04 0.00 0.00 209373.19 6553.60 225693.50 00:29:19.401 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme7n1 : 1.02 254.25 15.89 0.00 0.00 223748.79 2418.59 247663.66 00:29:19.401 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.401 Verification LBA range: start 0x0 length 0x400 00:29:19.401 Nvme8n1 : 1.03 249.41 15.59 0.00 0.00 224662.92 17476.27 247663.66 00:29:19.402 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.402 Verification LBA range: start 0x0 length 0x400 00:29:19.402 Nvme9n1 : 0.97 197.17 12.32 0.00 0.00 275075.98 21470.84 250659.60 00:29:19.402 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.402 Verification LBA range: start 0x0 length 0x400 00:29:19.402 Nvme10n1 : 0.97 197.82 12.36 0.00 0.00 269829.85 18100.42 263641.97 00:29:19.402 [2024-12-13T02:40:20.611Z] =================================================================================================================== 00:29:19.402 [2024-12-13T02:40:20.611Z] Total : 2457.13 153.57 0.00 0.00 237083.66 1880.26 263641.97 00:29:20.780 03:40:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:21.717 03:40:22 
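The pass criterion for this shutdown case is simply that the first target bdev reports at least 100 completed reads while bdevperf is still running: read_io_count goes from 67 to 195 above, the polling loop breaks, and bdevperf is killed after roughly one second of I/O (the summary reports a test time of about 1.03 s). The MiB/s column is just IOPS times the 64 KiB I/O size, e.g. the aggregate 2457.13 IOPS x 65536 bytes ≈ 153.6 MiB/s, matching the reported 153.57 MiB/s.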
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2792652 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.717 rmmod nvme_tcp 00:29:21.717 rmmod nvme_fabrics 00:29:21.717 rmmod nvme_keyring 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2792652 ']' 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2792652 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2792652 ']' 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2792652 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2792652 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2792652' 00:29:21.717 killing process with pid 2792652 00:29:21.717 03:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2792652 00:29:21.717 03:40:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2792652 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.005 03:40:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.911 00:29:26.911 real 0m12.868s 00:29:26.911 user 0m43.544s 00:29:26.911 sys 0m1.734s 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.911 ************************************ 00:29:26.911 END TEST nvmf_shutdown_tc2 00:29:26.911 ************************************ 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.911 ************************************ 00:29:26.911 START TEST nvmf_shutdown_tc3 00:29:26.911 ************************************ 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.911 03:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:26.911 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.911 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:26.912 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:26.912 Found net devices under 0000:af:00.0: cvl_0_0 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.912 03:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:26.912 Found net devices under 0000:af:00.1: cvl_0_1 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.912 03:40:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.912 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.912 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.912 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.912 03:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:29:27.172 00:29:27.172 --- 10.0.0.2 ping statistics --- 00:29:27.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.172 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:29:27.172 00:29:27.172 --- 10.0.0.1 ping statistics --- 00:29:27.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.172 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2794969 00:29:27.172 03:40:28 
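[editor's note] For readers following the trace above, the nvmf_tcp_init step boils down to moving the target-side port into its own network namespace and ping-checking both directions before the target is started. A minimal standalone sketch of that sequence is below; the interface names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.x addresses are taken from the log, while the variable names and ordering are an illustrative reconstruction, not the exact nvmf/common.sh code.

    # sketch of the namespace setup performed by nvmf_tcp_init
    TARGET_IF=cvl_0_0            # port handed to the SPDK target
    INITIATOR_IF=cvl_0_1         # port left in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                          # isolate the target port
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target side
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                        # target -> initiator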
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2794969 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2794969 ']' 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.172 03:40:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.172 [2024-12-13 03:40:28.357724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:27.172 [2024-12-13 03:40:28.357811] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.431 [2024-12-13 03:40:28.474508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:27.431 [2024-12-13 03:40:28.581681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.431 [2024-12-13 03:40:28.581727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.431 [2024-12-13 03:40:28.581737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.431 [2024-12-13 03:40:28.581748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.431 [2024-12-13 03:40:28.581756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
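[editor's note] The nvmfappstart/waitforlisten pair seen here amounts to launching nvmf_tgt inside the target namespace and polling until its RPC socket answers. A rough equivalent is sketched below, assuming the stock SPDK rpc.py client; the retry loop and sleep interval are illustrative, and the log's repeated "ip netns exec" prefix (an artifact of how NVMF_APP is assembled) is collapsed to a single invocation.

    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand used in this sketch

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # wait until the target answers on its default RPC socket
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" || exit 1      # bail out if the target died during startup
        sleep 0.5
    done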
00:29:27.431 [2024-12-13 03:40:28.583991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.431 [2024-12-13 03:40:28.584066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.431 [2024-12-13 03:40:28.584149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.431 [2024-12-13 03:40:28.584173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.999 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:27.999 [2024-12-13 03:40:29.202622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.258 03:40:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.258 Malloc1 00:29:28.258 [2024-12-13 03:40:29.373246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.258 Malloc2 00:29:28.517 Malloc3 00:29:28.517 Malloc4 00:29:28.776 Malloc5 00:29:28.776 Malloc6 00:29:28.776 Malloc7 00:29:29.035 Malloc8 00:29:29.035 Malloc9 00:29:29.294 Malloc10 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2795311 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2795311 /var/tmp/bdevperf.sock 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2795311 ']' 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.294 03:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:29.294 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 
"name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.295 { 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme$subsystem", 00:29:29.295 "trtype": "$TEST_TRANSPORT", 00:29:29.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "$NVMF_PORT", 00:29:29.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.295 "hdgst": ${hdgst:-false}, 00:29:29.295 "ddgst": ${ddgst:-false} 00:29:29.295 }, 00:29:29.295 "method": "bdev_nvme_attach_controller" 00:29:29.295 } 00:29:29.295 EOF 00:29:29.295 )") 00:29:29.295 03:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:29.295 [2024-12-13 03:40:30.413910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:29.295 [2024-12-13 03:40:30.414009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795311 ] 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:29.295 03:40:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:29.295 "params": { 00:29:29.295 "name": "Nvme1", 00:29:29.295 "trtype": "tcp", 00:29:29.295 "traddr": "10.0.0.2", 00:29:29.295 "adrfam": "ipv4", 00:29:29.295 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme2", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme3", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme4", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme5", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme6", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme7", 00:29:29.296 "trtype": "tcp", 00:29:29.296 
"traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme8", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme9", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 },{ 00:29:29.296 "params": { 00:29:29.296 "name": "Nvme10", 00:29:29.296 "trtype": "tcp", 00:29:29.296 "traddr": "10.0.0.2", 00:29:29.296 "adrfam": "ipv4", 00:29:29.296 "trsvcid": "4420", 00:29:29.296 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:29.296 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:29.296 "hdgst": false, 00:29:29.296 "ddgst": false 00:29:29.296 }, 00:29:29.296 "method": "bdev_nvme_attach_controller" 00:29:29.296 }' 00:29:29.555 [2024-12-13 03:40:30.530139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.555 [2024-12-13 03:40:30.637793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.461 Running I/O for 10 seconds... 
00:29:32.044 03:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.044 03:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:32.044 03:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:32.044 03:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.044 03:40:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2794969 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2794969 ']' 
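[editor's note] The waitforio helper that produces the read_io_count=131 line above is a bounded poll of bdevperf's iostat over its RPC socket: once Nvme1n1 has completed at least 100 reads the test treats I/O as in flight and moves on to killing the target (the shutdown under test). A standalone sketch of that loop, assuming the stock rpc.py client and jq; the 10-iteration bound and 1-second sleep are illustrative:

    # poll bdevperf until Nvme1n1 has completed >= 100 reads, or give up after 10 tries
    ret=1
    for i in $(seq 10); do
        reads=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0
            break            # enough I/O observed; safe to start the shutdown phase
        fi
        sleep 1
    done
    [ $ret -eq 0 ] || echo "no I/O observed on Nvme1n1" >&2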
00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2794969 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2794969 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2794969' 00:29:32.044 killing process with pid 2794969 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2794969 00:29:32.044 03:40:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2794969 00:29:32.044 [2024-12-13 03:40:33.110222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.044 [2024-12-13 03:40:33.110360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110387] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110575] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110752] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.110808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:32.045 [2024-12-13 03:40:33.113467] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
00:29:32.045 [2024-12-13 03:40:33.113475 - 03:40:33.113889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set (identical message repeated ~50 times in this interval)
00:29:32.046 [2024-12-13 03:40:33.116468 - 03:40:33.117033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set (identical message repeated ~60 times in this interval)
00:29:32.047 [2024-12-13 03:40:33.118808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.047 [2024-12-13 03:40:33.118847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.047 [2024-12-13 03:40:33.118862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.047 [2024-12-13 03:40:33.118873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.047 [2024-12-13 03:40:33.118885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.047 [2024-12-13 03:40:33.118895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.047 [2024-12-13 03:40:33.118906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.047 [2024-12-13 03:40:33.118923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.047 [2024-12-13 03:40:33.118933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set
00:29:32.047 [2024-12-13 03:40:33.119000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.047 [2024-12-13 03:40:33.119014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.047 [2024-12-13 03:40:33.119037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:32.047 [2024-12-13 03:40:33.119047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.047 [2024-12-13 03:40:33.119070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.047 [2024-12-13 03:40:33.119090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:32.047 [2024-12-13 03:40:33.119173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.047 [2024-12-13 03:40:33.119186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.047 [2024-12-13 03:40:33.119210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.047 [2024-12-13 03:40:33.119231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.047 [2024-12-13 03:40:33.119251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.119261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:32.047 [2024-12-13 03:40:33.121499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121608] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.121980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.121992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.122003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.047 [2024-12-13 03:40:33.122015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.047 [2024-12-13 03:40:33.122024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.048 [2024-12-13 03:40:33.122735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.048 [2024-12-13 03:40:33.122747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.048 [2024-12-13 03:40:33.122756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.048 [2024-12-13 03:40:33.122768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.048 [2024-12-13 03:40:33.122778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122842 - 03:40:33.122946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set (identical message repeated 7 times)
00:29:32.049 [2024-12-13 03:40:33.122865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.122983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.122995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.123380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.123412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.123434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.123456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.123478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.049 [2024-12-13 03:40:33.123503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.049 [2024-12-13 03:40:33.123515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.049 [2024-12-13 03:40:33.123955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
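[editorial note] The repeated tcp.c:1790 and nvme_tcp.c:326 *ERROR* lines in this run come from a guard that refuses to "set" a qpair receive state the qpair is already in; the burst pattern suggests the disconnect/abort path keeps requesting the same transition while queued I/O is drained, so the same line is printed once per redundant request. Below is a minimal, self-contained sketch of that kind of idempotent state guard. The struct, enum values, and helper name are illustrative assumptions for this log, not SPDK's actual definitions; only the message wording matches the output above.

/* Sketch only: an idempotent "set state" helper that logs when asked to
 * re-enter the state it is already in. Types and values are hypothetical. */
#include <stdio.h>

enum pdu_recv_state { RECV_STATE_READY = 0, RECV_STATE_QUIESCING = 5, RECV_STATE_6 = 6 };

struct tqpair {
	void *addr;                     /* printed as the tqpair=%p token */
	enum pdu_recv_state recv_state;
};

static void set_recv_state(struct tqpair *q, enum pdu_recv_state state)
{
	if (q->recv_state == state) {
		/* Same wording as the tcp.c:1790 message seen in this log. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			q->addr, (int)state);
		return;                 /* transition is a no-op */
	}
	q->recv_state = state;
}

int main(void)
{
	struct tqpair q = { .addr = &q, .recv_state = RECV_STATE_READY };

	set_recv_state(&q, RECV_STATE_6);       /* first call: real transition, silent */
	for (int i = 0; i < 3; i++) {
		set_recv_state(&q, RECV_STATE_6); /* repeated calls: one ERROR line each */
	}
	return 0;
}

Compiled with a plain cc invocation, the sketch prints one such line per redundant call, which is why the console shows the identical message once for every repeated request during teardown.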
00:29:32.049 [2024-12-13 03:40:33.123978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.049 [2024-12-13 03:40:33.123987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.123999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 [2024-12-13 03:40:33.124176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.050 [2024-12-13 03:40:33.124187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.050 
[2024-12-13 03:40:33.124198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.050 [2024-12-13 03:40:33.124385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.050 [2024-12-13 03:40:33.124376 - 03:40:33.124741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set (identical message repeated ~35 times)
00:29:32.050 [2024-12-13 03:40:33.124400 - 03:40:33.124712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51-63 nsid:1 lba:31104-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.051 [2024-12-13 03:40:33.124724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.051 [2024-12-13 03:40:33.124735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:32.051 [2024-12-13 03:40:33.124749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:29:32.051 [2024-12-13 03:40:33.124751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 03:40:33.124761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.051 with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.051 [2024-12-13 03:40:33.124782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.051 [2024-12-13 03:40:33.124791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.051 [2024-12-13 03:40:33.124808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.051 [2024-12-13 03:40:33.124818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128[2024-12-13 03:40:33.124827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.051 with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.051 [2024-12-13 03:40:33.124847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.051 [2024-12-13 03:40:33.124855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.051 [2024-12-13 03:40:33.124865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.124993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.125001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.051 [2024-12-13 03:40:33.127545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.127862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:32.052 [2024-12-13 03:40:33.127952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:32.052 [2024-12-13 03:40:33.128001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:32.052 [2024-12-13 03:40:33.128020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:32.052 [2024-12-13 03:40:33.128969] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.052 [2024-12-13 03:40:33.129027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:32.052 [2024-12-13 03:40:33.129077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:32.052 [2024-12-13 03:40:33.129130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.129255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.052 [2024-12-13 03:40:33.129329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:32.052 [2024-12-13 03:40:33.129359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:32.052 [2024-12-13 03:40:33.129548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 
03:40:33.129740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.052 [2024-12-13 03:40:33.129944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.052 [2024-12-13 03:40:33.129956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.053 [2024-12-13 03:40:33.129966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.053 [2024-12-13 03:40:33.129979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.053 [2024-12-13 03:40:33.129989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.053 [2024-12-13 03:40:33.130002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.053 [2024-12-13 03:40:33.130012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.053 [2024-12-13 03:40:33.130023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d500 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.130420] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.053 [2024-12-13 03:40:33.130930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.130960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.130971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.130981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.130990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.130999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131083] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131264] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131441] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:32.053 [2024-12-13 03:40:33.131829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.053 [2024-12-13 03:40:33.131861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:32.053 [2024-12-13 03:40:33.131875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.132068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.054 [2024-12-13 03:40:33.132084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:32.054 [2024-12-13 03:40:33.132095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.133343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:32.054 [2024-12-13 03:40:33.133396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:32.054 [2024-12-13 03:40:33.133415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:32.054 [2024-12-13 03:40:33.133516] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.054 [2024-12-13 03:40:33.133830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.133857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.133867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.133875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.133884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.133893] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.054
[2024-12-13 03:40:33.133901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:32.054
[2024-12-13 03:40:33.133930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:32.054
[2024-12-13 03:40:33.133959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:32.054
[2024-12-13 03:40:33.133970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:32.054
[2024-12-13 03:40:33.133983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.133987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:32.054
[2024-12-13 03:40:33.133992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.134001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:32.054
[2024-12-13 03:40:33.134002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.134012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:32.054
[2024-12-13 03:40:33.134013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054
[2024-12-13 03:40:33.134025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:32.054 [2024-12-13 03:40:33.134024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134036] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:32.054 [2024-12-13 03:40:33.134037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 
[2024-12-13 03:40:33.134202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 
[2024-12-13 03:40:33.134394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:32.054 [2024-12-13 03:40:33.134674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:32.054 [2024-12-13 03:40:33.134844] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.054 [2024-12-13 03:40:33.135034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:32.054 [2024-12-13 03:40:33.135051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:32.054 [2024-12-13 03:40:33.135062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:32.055 [2024-12-13 03:40:33.135073] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:32.055 [2024-12-13 03:40:33.135165] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.055 [2024-12-13 03:40:33.135608] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.055 [2024-12-13 03:40:33.135779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135873] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.135995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136065] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136242] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.136331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:32.055 [2024-12-13 03:40:33.137200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.055 [2024-12-13 03:40:33.137438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.055 [2024-12-13 03:40:33.137448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.056 [2024-12-13 03:40:33.137580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 
03:40:33.137800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.137984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.137997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.056 [2024-12-13 03:40:33.138337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.056 [2024-12-13 03:40:33.138349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.138674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.138684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(6) to be set 00:29:32.057 [2024-12-13 03:40:33.139092] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:32.057 [2024-12-13 03:40:33.139223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:32.057 [2024-12-13 03:40:33.139357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.057 [2024-12-13 03:40:33.139433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.139442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:32.057 [2024-12-13 03:40:33.139465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:32.057 [2024-12-13 03:40:33.139489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:32.057 [2024-12-13 03:40:33.140624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.140647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.140665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.140676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.140688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.140699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.140712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.140723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.140736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.140747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 03:40:33.140759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.057 [2024-12-13 03:40:33.140769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.057 [2024-12-13 
03:40:33.140782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.140982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.140994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141017] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.058 [2024-12-13 03:40:33.141702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.058 [2024-12-13 03:40:33.141713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.141978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.141988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.142001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.142012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.142026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.142037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.142050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.145952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.145968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.145980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.145996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.146008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.146020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.146031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.146042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d780 is same with the state(6) to be set 00:29:32.059 [2024-12-13 03:40:33.147408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.059 [2024-12-13 03:40:33.147968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.059 [2024-12-13 03:40:33.147980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.147994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.060 [2024-12-13 03:40:33.148934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.060 [2024-12-13 03:40:33.148946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.148956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.148968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set 00:29:32.061 [2024-12-13 03:40:33.150214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:32.061 [2024-12-13 03:40:33.150244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:32.061 [2024-12-13 03:40:33.150259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:32.061 [2024-12-13 03:40:33.150276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:32.061 [2024-12-13 03:40:33.150297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:32.061 [2024-12-13 03:40:33.150370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:32.061 [2024-12-13 03:40:33.150414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:32.061 [2024-12-13 03:40:33.150444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:32.061 [2024-12-13 03:40:33.151167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-12-13 03:40:33.151201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:32.061 [2024-12-13 03:40:33.151215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:32.061 [2024-12-13 03:40:33.151395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-12-13 03:40:33.151413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:32.061 [2024-12-13 03:40:33.151425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:32.061 [2024-12-13 03:40:33.151537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-12-13 03:40:33.151553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:32.061 [2024-12-13 03:40:33.151564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x615000326480 is same with the state(6) to be set 00:29:32.061 [2024-12-13 03:40:33.151653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.061 [2024-12-13 03:40:33.151669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420 00:29:32.061 [2024-12-13 03:40:33.151684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:32.061 [2024-12-13 03:40:33.152166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 
03:40:33.152399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152638] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.061 [2024-12-13 03:40:33.152911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.061 [2024-12-13 03:40:33.152928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.152940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.152951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.152964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.152975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.152988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.153726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.153737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:29:32.062 [2024-12-13 03:40:33.155049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.062 [2024-12-13 03:40:33.155070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.062 [2024-12-13 03:40:33.155087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155161] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.155976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.155988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.156003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.156014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.156026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.156037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.156050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.156061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.156074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.156085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.063 [2024-12-13 03:40:33.156098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.063 [2024-12-13 03:40:33.156111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.156135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.156159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.064 [2024-12-13 03:40:33.156184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.156218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.156243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.156268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.156281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(6) to be set 00:29:32.064 [2024-12-13 03:40:33.157984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:32.064 [2024-12-13 03:40:33.158017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:32.064 [2024-12-13 03:40:33.158034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:32.064 [2024-12-13 03:40:33.158291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-12-13 03:40:33.158312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032aa80 with addr=10.0.0.2, port=4420 00:29:32.064 [2024-12-13 03:40:33.158327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:32.064 [2024-12-13 03:40:33.158342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.158357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.158371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.158385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.158784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-12-13 03:40:33.158805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:32.064 [2024-12-13 03:40:33.158817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:32.064 [2024-12-13 03:40:33.158970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 
[2024-12-13 03:40:33.158987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:32.064 [2024-12-13 03:40:33.158999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:32.064 [2024-12-13 03:40:33.159103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.064 [2024-12-13 03:40:33.159118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420 00:29:32.064 [2024-12-13 03:40:33.159130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:32.064 [2024-12-13 03:40:33.159142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.159156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.159166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.159178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.159192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:32.064 [2024-12-13 03:40:33.159204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.159213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.159223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.159236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:32.064 [2024-12-13 03:40:33.159246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.159255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.159264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.159274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:32.064 [2024-12-13 03:40:33.159284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.159294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.159303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.159312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
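The connect() failed, errno = 111 entries above are ECONNREFUSED: by this point in the shutdown test the NVMe/TCP target has already stopped listening, so every reconnect attempt to 10.0.0.2:4420 is refused and the controllers end up in the failed state reported here. A minimal triage sketch, not part of the test scripts in this run, that checks from the shell whether anything is still accepting connections on that address and port; the host and port come from the log lines above, and the use of bash's /dev/tcp redirection is an assumption about the build host's shell:

addr=10.0.0.2; port=4420   # taken from the sock connection error lines above
if timeout 2 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
  echo "listener at ${addr}:${port} still accepts connections"
else
  echo "connect to ${addr}:${port} refused or timed out (consistent with errno = 111)"
fi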
00:29:32.064 [2024-12-13 03:40:33.160128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.160154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.160168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:32.064 [2024-12-13 03:40:33.160180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.160190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.160200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.160210] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:32.064 [2024-12-13 03:40:33.160442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.160457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.160467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.160477] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:32.064 [2024-12-13 03:40:33.160488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.160498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.160507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.160516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:32.064 [2024-12-13 03:40:33.160525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:32.064 [2024-12-13 03:40:33.160534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:32.064 [2024-12-13 03:40:33.160543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:32.064 [2024-12-13 03:40:33.160552] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
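Every queued READ in the dumps above and below completes with ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) and status code 0x08, the status returned for commands aborted because their submission queue was deleted during the disconnect. As a hypothetical log-reading helper, not an SPDK utility, the shell function below sketches how that (SCT/SC) pair sits in the 16-bit completion status word from which spdk_nvme_print_completion also reports p, m and dnr, assuming the standard CQE dword 3 layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9, more in bit 14, dnr in bit 15 of the upper half-word):

decode_cpl_status() {                 # argument: 16-bit status word (CQE DW3 >> 16)
  local v=$1
  printf 'sct:%#x sc:%#04x p:%d m:%d dnr:%d\n' \
    $(( (v >> 9) & 0x7 )) $(( (v >> 1) & 0xff )) \
    $(( v & 0x1 )) $(( (v >> 14) & 0x1 )) $(( (v >> 15) & 0x1 ))
}
decode_cpl_status $(( (0x0 << 9) | (0x08 << 1) ))   # (00/08) -> sct:0 sc:0x08 p:0 m:0 dnr:0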
00:29:32.064 [2024-12-13 03:40:33.160698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.064 [2024-12-13 03:40:33.160912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.064 [2024-12-13 03:40:33.160931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.160942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 
03:40:33.160955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.160965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.160977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.160989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161189] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.065 [2024-12-13 03:40:33.161845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.065 [2024-12-13 03:40:33.161855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.161867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.161879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.161893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.161905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.161922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.161933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.161946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.161958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.161971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.161983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.161998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.162215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.162225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e400 is same with the state(6) to be set 00:29:32.066 [2024-12-13 03:40:33.163517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.163979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.163992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.164003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.164015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.066 [2024-12-13 03:40:33.164026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.066 [2024-12-13 03:40:33.164039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.164981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.067 [2024-12-13 03:40:33.164994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.067 [2024-12-13 03:40:33.165005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.068 [2024-12-13 03:40:33.165017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.068 [2024-12-13 03:40:33.165029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.068 [2024-12-13 03:40:33.165041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.068 [2024-12-13 03:40:33.165050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.068 [2024-12-13 03:40:33.165062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.068 [2024-12-13 03:40:33.165072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.068 [2024-12-13 03:40:33.165082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e680 is same with the state(6) to be set 00:29:32.068 [2024-12-13 03:40:33.166404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:32.068 [2024-12-13 03:40:33.166425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:32.068 [2024-12-13 03:40:33.166437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 
00:29:32.068 [2024-12-13 03:40:33.166449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:32.068 [2024-12-13 03:40:33.166465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:32.068 task offset: 28928 on job bdev=Nvme3n1 fails
00:29:32.068
00:29:32.068 Latency(us)
00:29:32.068 [2024-12-13T02:40:33.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme1n1 ended in about 0.81 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme1n1 : 0.81 233.67 14.60 24.60 0.00 244086.97 18100.42 242670.45
00:29:32.068 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme2n1 ended in about 0.83 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme2n1 : 0.83 154.74 9.67 77.37 0.00 266895.69 20347.37 242670.45
00:29:32.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme3n1 ended in about 0.81 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme3n1 : 0.81 237.95 14.87 79.32 0.00 190836.30 5960.66 234681.30
00:29:32.068 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme4n1 ended in about 0.81 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme4n1 : 0.81 237.69 14.86 79.23 0.00 186836.36 5867.03 240673.16
00:29:32.068 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme5n1 ended in about 0.83 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme5n1 : 0.83 153.31 9.58 76.66 0.00 252571.39 20472.20 240673.16
00:29:32.068 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme6n1 ended in about 0.84 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme6n1 : 0.84 171.97 10.75 57.32 0.00 245753.17 17101.78 233682.65
00:29:32.068 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme7n1 ended in about 0.84 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme7n1 : 0.84 151.78 9.49 75.89 0.00 244229.85 29335.16 238675.87
00:29:32.068 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme8n1 ended in about 0.85 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme8n1 : 0.85 151.27 9.45 75.63 0.00 239507.42 15229.32 245666.38
00:29:32.068 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme9n1 ended in about 0.82 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme9n1 : 0.82 160.83 10.05 77.98 0.00 221004.72 32955.25 248662.31
00:29:32.068 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.068 Job: Nvme10n1 ended in about 0.83 seconds with error
00:29:32.068 Verification LBA range: start 0x0 length 0x400
00:29:32.068 Nvme10n1 : 0.83 157.81 9.86 77.10 0.00 219790.62 18599.74 265639.25
00:29:32.068 [2024-12-13T02:40:33.277Z]
=================================================================================================================== 00:29:32.068 [2024-12-13T02:40:33.277Z] Total : 1811.02 113.19 701.09 0.00 228614.76 5867.03 265639.25 00:29:32.328 [2024-12-13 03:40:33.300031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:32.328 [2024-12-13 03:40:33.300093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:32.328 [2024-12-13 03:40:33.300554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.300584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.300600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.300834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.300851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.300862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.301066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.301083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.301094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.301327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.301343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.301353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.301498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.301512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000329680 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.301523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.301629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.301643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032a080 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.301653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.301705] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:29:32.328 [2024-12-13 03:40:33.302552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:32.328 [2024-12-13 03:40:33.302639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.302661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.302675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.302687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.302699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.302712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.302926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:32.328 [2024-12-13 03:40:33.302943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:32.328 [2024-12-13 03:40:33.302959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:32.328 [2024-12-13 03:40:33.303209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.303228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032aa80 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.303240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.303251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:32.328 [2024-12-13 03:40:33.303261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:32.328 [2024-12-13 03:40:33.303272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:32.328 [2024-12-13 03:40:33.303284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:32.328 [2024-12-13 03:40:33.303296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:32.328 [2024-12-13 03:40:33.303305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:32.328 [2024-12-13 03:40:33.303314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:32.328 [2024-12-13 03:40:33.303323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:29:32.328 [2024-12-13 03:40:33.303331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:32.328 [2024-12-13 03:40:33.303340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:32.328 [2024-12-13 03:40:33.303349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:32.328 [2024-12-13 03:40:33.303358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:32.328 [2024-12-13 03:40:33.303367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:32.328 [2024-12-13 03:40:33.303376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:32.328 [2024-12-13 03:40:33.303384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:32.328 [2024-12-13 03:40:33.303393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:32.328 [2024-12-13 03:40:33.303403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:32.328 [2024-12-13 03:40:33.303415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:32.328 [2024-12-13 03:40:33.303423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:32.328 [2024-12-13 03:40:33.303432] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:32.328 [2024-12-13 03:40:33.303441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:32.328 [2024-12-13 03:40:33.303450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:32.328 [2024-12-13 03:40:33.303459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:32.328 [2024-12-13 03:40:33.303468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:32.328 [2024-12-13 03:40:33.303698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.303722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.303733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.303876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.303890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.303901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.304049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.328 [2024-12-13 03:40:33.304063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:32.328 [2024-12-13 03:40:33.304073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:32.328 [2024-12-13 03:40:33.304087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.304132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:32.328 [2024-12-13 03:40:33.304146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:32.329 [2024-12-13 03:40:33.304159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:32.329 [2024-12-13 03:40:33.304170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:32.329 [2024-12-13 03:40:33.304180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:32.329 [2024-12-13 03:40:33.304189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:32.329 [2024-12-13 03:40:33.304199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:32.329 [2024-12-13 03:40:33.304238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:32.329 [2024-12-13 03:40:33.304250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:32.329 [2024-12-13 03:40:33.304259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:32.329 [2024-12-13 03:40:33.304267] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:32.329 [2024-12-13 03:40:33.304277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:32.329 [2024-12-13 03:40:33.304289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:32.329 [2024-12-13 03:40:33.304298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:32.329 [2024-12-13 03:40:33.304307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:32.329 [2024-12-13 03:40:33.304316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:32.329 [2024-12-13 03:40:33.304325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:32.329 [2024-12-13 03:40:33.304335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:32.329 [2024-12-13 03:40:33.304346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:35.617 03:40:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:36.186 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2795311 00:29:36.186 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:36.186 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2795311 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2795311 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:36.187 03:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.187 rmmod nvme_tcp 00:29:36.187 rmmod nvme_fabrics 00:29:36.187 rmmod nvme_keyring 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2794969 ']' 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2794969 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2794969 ']' 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2794969 00:29:36.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2794969) - No such process 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2794969 is not found' 00:29:36.187 Process with pid 2794969 is not found 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.187 03:40:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.723 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:38.723 00:29:38.723 real 0m11.416s 00:29:38.723 user 0m33.088s 00:29:38.723 sys 0m1.596s 00:29:38.723 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.723 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:38.723 ************************************ 00:29:38.723 END TEST nvmf_shutdown_tc3 00:29:38.723 ************************************ 00:29:38.723 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:38.723 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:38.723 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.724 ************************************ 00:29:38.724 START TEST nvmf_shutdown_tc4 00:29:38.724 ************************************ 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:38.724 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:38.724 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:38.724 Found net devices under 0000:af:00.0: cvl_0_0 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:38.724 Found net devices under 0000:af:00.1: cvl_0_1 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.724 03:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.724 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:38.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:29:38.725 00:29:38.725 --- 10.0.0.2 ping statistics --- 00:29:38.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.725 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:29:38.725 00:29:38.725 --- 10.0.0.1 ping statistics --- 00:29:38.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.725 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2796963 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2796963 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2796963 ']' 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:38.725 03:40:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:38.725 [2024-12-13 03:40:39.756052] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:38.725 [2024-12-13 03:40:39.756143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.725 [2024-12-13 03:40:39.874782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.983 [2024-12-13 03:40:39.985194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.983 [2024-12-13 03:40:39.985238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.983 [2024-12-13 03:40:39.985249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.983 [2024-12-13 03:40:39.985259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.983 [2024-12-13 03:40:39.985266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.983 [2024-12-13 03:40:39.987600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.983 [2024-12-13 03:40:39.987673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.983 [2024-12-13 03:40:39.987752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.984 [2024-12-13 03:40:39.987774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.552 [2024-12-13 03:40:40.607883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.552 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.553 03:40:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:39.553 Malloc1 00:29:39.812 [2024-12-13 03:40:40.769444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.812 Malloc2 00:29:39.812 Malloc3 00:29:40.071 Malloc4 00:29:40.071 Malloc5 00:29:40.071 Malloc6 00:29:40.330 Malloc7 00:29:40.330 Malloc8 00:29:40.589 Malloc9 00:29:40.589 Malloc10 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2797268 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:40.589 03:40:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:40.848 [2024-12-13 03:40:41.837977] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2796963 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2796963 ']' 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2796963 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2796963 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2796963' 00:29:46.126 killing process with pid 2796963 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2796963 00:29:46.126 03:40:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2796963 00:29:46.126 [2024-12-13 03:40:46.825996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.126 [2024-12-13 03:40:46.826058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.126 [2024-12-13 03:40:46.826069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.126 [2024-12-13 03:40:46.826080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.126 [2024-12-13 03:40:46.826089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005480 is same with the state(6) to be set 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 Write completed with error (sct=0, sc=8) 00:29:46.126 starting I/O failed: -6 
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.126 [2024-12-13 03:40:46.827332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set
[... identical recv-state messages and interleaved I/O failure records omitted ...]
00:29:46.126 [2024-12-13 03:40:46.827507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.126 [2024-12-13 03:40:46.828669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005c80 is same with the state(6) to be set
[... identical recv-state messages and interleaved I/O failure records omitted ...]
00:29:46.127 [2024-12-13 03:40:46.829489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.127 [2024-12-13 03:40:46.831827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
[... identical recv-state messages and interleaved I/O failure records omitted ...]
00:29:46.127 [2024-12-13 03:40:46.831898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.127 [2024-12-13 03:40:46.833593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set
[... identical recv-state messages and interleaved I/O failure records omitted ...]
00:29:46.128 [2024-12-13 03:40:46.834696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
[... identical recv-state messages and interleaved I/O failure records omitted ...]
00:29:46.128 [2024-12-13 03:40:46.842794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.128 NVMe io qpair process completion error
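The "CQ transport error -6 (No such device or address)" records above are spdk_nvme_qpair_process_completions() returning -ENXIO for each I/O qpair once the TCP connection to the killed target is gone. A minimal sketch of how a host-side poller typically sees and reacts to that return value follows; the helper name and the recovery policy are assumptions for illustration, not the test tool's actual logic.

#include <errno.h>
#include <stdbool.h>
#include "spdk/nvme.h"

/*
 * Drain completions on one I/O qpair and treat a negative return as a dead
 * transport. -ENXIO (-6, "No such device or address") is what the records
 * above report once the target side of the connection disappears. Simply
 * abandoning the qpair here is an assumption; a real application might tear
 * it down and reconnect instead.
 */
static bool
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	/* second argument 0 means "no limit on completions processed per call" */
	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	if (rc == -ENXIO) {
		return false;	/* connection/controller is gone */
	}
	return rc >= 0;
}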
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.128 [2024-12-13 03:40:46.844468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.128 [2024-12-13 03:40:46.846125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.129 [2024-12-13 03:40:46.848828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.129 [2024-12-13 03:40:46.859588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.129 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.129 [2024-12-13 03:40:46.861130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.130 [2024-12-13 03:40:46.862961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.130 [2024-12-13 03:40:46.865439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted ...]
00:29:46.131 [2024-12-13 03:40:46.880669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.131 NVMe io qpair process completion error
(sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 [2024-12-13 03:40:46.882194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 starting I/O failed: -6 00:29:46.131 Write completed with error (sct=0, sc=8) 00:29:46.131 Write 
00:29:46.131 Write completed with error (sct=0, sc=8)
00:29:46.131 starting I/O failed: -6
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.131 [2024-12-13 03:40:46.884172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.132 [2024-12-13 03:40:46.886590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.132 [2024-12-13 03:40:46.901181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.132 NVMe io qpair process completion error
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.132 [2024-12-13 03:40:46.902712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
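The block above repeats, per subsystem and per qpair, for every write that was still queued when the TCP connection to the target dropped. A minimal sketch of how such completions surface through the SPDK NVMe driver follows; it is illustrative only, the callback name and printf format are placeholders and not the test's actual code. sct=0 is the generic status code type, and sc=8 in that set is "command aborted due to SQ deletion", which is how queued commands are completed once the controller connection goes away.

#include <stdio.h>
#include "spdk/nvme.h"

/*
 * Hypothetical completion callback (not taken from the test code) showing
 * where a record like "Write completed with error (sct=0, sc=8)" comes from:
 * the driver hands back an spdk_nvme_cpl whose status fields the submitter
 * inspects.
 */
static void
write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0 (generic), sc=8 is "command aborted due to SQ deletion" */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
		return;
	}
	/* success path: release the buffer / account the completed I/O */
}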
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.133 [2024-12-13 03:40:46.904642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.133 [2024-12-13 03:40:46.907225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.134 [2024-12-13 03:40:46.921536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:46.134 NVMe io qpair process completion error
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.134 [2024-12-13 03:40:46.922887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.134 [2024-12-13 03:40:46.924836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.135 [2024-12-13 03:40:46.927405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.135 [2024-12-13 03:40:46.944940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:46.135 NVMe io qpair process completion error
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.135 [2024-12-13 03:40:46.946556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.136 [2024-12-13 03:40:46.948576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.136 [2024-12-13 03:40:46.951002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.137 [2024-12-13 03:40:46.965172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:46.137 NVMe io qpair process completion error
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.137 [2024-12-13 03:40:46.966741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.137 [2024-12-13 03:40:46.968413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:29:46.138 [2024-12-13 03:40:46.970892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated records: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 [2024-12-13 03:40:46.981632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.138 NVMe io qpair process completion error 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 [2024-12-13 03:40:46.983211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, 
sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.138 starting I/O failed: -6 00:29:46.138 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 [2024-12-13 03:40:46.984836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 
00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 [2024-12-13 03:40:46.987506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error 
(sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error 
(sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.139 starting I/O failed: -6 00:29:46.139 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 [2024-12-13 03:40:47.001708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.140 NVMe io qpair process completion error 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 
00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 [2024-12-13 03:40:47.003411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write 
completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 [2024-12-13 03:40:47.005285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 Write completed with error (sct=0, sc=8) 00:29:46.140 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed 
with error (sct=0, sc=8) 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 [2024-12-13 03:40:47.007839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 
Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 Write completed with error (sct=0, sc=8) 00:29:46.141 starting I/O failed: -6 00:29:46.141 
[2024-12-13 03:40:47.025940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.141 NVMe io qpair process completion error 00:29:46.141 Initializing NVMe Controllers 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:46.141 Controller IO queue size 128, less than required. 00:29:46.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
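The failed writes and the "Controller IO queue size 128, less than required" notices above come from the spdk_nvme_perf run that nvmf_shutdown_tc4 drives against these ten TCP subsystems while the target is being shut down (the binary reports "errors occurred" a little further down). As a rough illustration only, here is a hand-written sketch of such an invocation; the exact flags shutdown.sh passes are not visible in this log, so every option below is an assumption rather than the test's real command line:

  # Hypothetical sketch only -- flags are assumed, not taken from this log.
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # -q  outstanding I/Os per qpair; whatever exceeds the controller's 128-entry
  #     IO queue is held back in the NVMe driver, which is what the notice above warns about
  # -o  I/O size in bytes   -w  workload type   -t  run time in seconds
  # -r  transport ID of one of the TCP subsystems listed above
  "$PERF" -q 128 -o 4096 -w write -t 10 \
          -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'

Lowering the queue depth or the I/O size, as the notice itself suggests, would keep requests from piling up in the driver; the writes in this run still fail with CQ transport error -6 (No such device or address) because the test deliberately tears the target down while I/O is in flight.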
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:46.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:46.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:46.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:46.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:46.142 Initialization complete. Launching workers.
00:29:46.142 ========================================================
00:29:46.142                                                                               Latency(us)
00:29:46.142 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:     1843.40      79.21   69439.66    1175.86  229889.29
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:     1827.23      78.51   70171.63    1276.52  243581.14
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:     1846.59      79.35   69595.70    1903.97  260997.98
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     1822.77      78.32   67743.70    2012.11  149515.04
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:     1798.74      77.29   68772.56    1542.45  143621.26
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:    1771.94      76.14   69929.49    1229.36  138997.32
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:     1760.46      75.64   70577.30    1992.91  132032.77
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:     1773.64      76.21   70227.19    1926.80  149833.37
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:     1768.33      75.98   70614.02    1436.03  167285.76
00:29:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:     1749.40      75.17   71600.82    1866.54  214019.09
00:29:46.142 ========================================================
00:29:46.142 Total                                                                    :    17962.49     771.83   69854.93    1175.86  260997.98
00:29:46.142
00:29:46.142 [2024-12-13 03:40:47.058755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set
00:29:46.142 [2024-12-13 03:40:47.058819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set
00:29:46.142 [2024-12-13 03:40:47.058862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set
00:29:46.142 [2024-12-13 03:40:47.058903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001de00 is same with the state(6) to be set
00:29:46.142 [2024-12-13 03:40:47.058953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f200 is same with the state(6) to be set
00:29:46.142 [2024-12-13 03:40:47.058998]
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(6) to be set 00:29:46.142 [2024-12-13 03:40:47.059040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:29:46.142 [2024-12-13 03:40:47.059094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fc00 is same with the state(6) to be set 00:29:46.142 [2024-12-13 03:40:47.059135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:29:46.142 [2024-12-13 03:40:47.059175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020b00 is same with the state(6) to be set 00:29:46.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:49.433 03:40:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2797268 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2797268 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2797268 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.002 rmmod nvme_tcp 00:29:50.002 rmmod nvme_fabrics 00:29:50.002 rmmod nvme_keyring 00:29:50.002 03:40:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2796963 ']' 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2796963 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2796963 ']' 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2796963 00:29:50.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2796963) - No such process 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2796963 is not found' 00:29:50.002 Process with pid 2796963 is not found 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.002 03:40:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.906 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.906 00:29:51.906 real 0m13.676s 00:29:51.906 user 0m39.652s 00:29:51.906 sys 0m5.061s 00:29:51.906 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.906 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.906 ************************************ 00:29:51.906 END TEST nvmf_shutdown_tc4 00:29:51.906 ************************************ 00:29:52.165 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:52.166 00:29:52.166 real 0m58.263s 00:29:52.166 user 2m50.248s 00:29:52.166 sys 0m14.510s 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.166 ************************************ 00:29:52.166 END TEST nvmf_shutdown 00:29:52.166 ************************************ 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:52.166 ************************************ 00:29:52.166 START TEST nvmf_nsid 00:29:52.166 ************************************ 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:52.166 * Looking for test storage... 
00:29:52.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.166 --rc genhtml_branch_coverage=1 00:29:52.166 --rc genhtml_function_coverage=1 00:29:52.166 --rc genhtml_legend=1 00:29:52.166 --rc geninfo_all_blocks=1 00:29:52.166 --rc geninfo_unexecuted_blocks=1 00:29:52.166 00:29:52.166 ' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.166 --rc genhtml_branch_coverage=1 00:29:52.166 --rc genhtml_function_coverage=1 00:29:52.166 --rc genhtml_legend=1 00:29:52.166 --rc geninfo_all_blocks=1 00:29:52.166 --rc geninfo_unexecuted_blocks=1 00:29:52.166 00:29:52.166 ' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.166 --rc genhtml_branch_coverage=1 00:29:52.166 --rc genhtml_function_coverage=1 00:29:52.166 --rc genhtml_legend=1 00:29:52.166 --rc geninfo_all_blocks=1 00:29:52.166 --rc geninfo_unexecuted_blocks=1 00:29:52.166 00:29:52.166 ' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:52.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.166 --rc genhtml_branch_coverage=1 00:29:52.166 --rc genhtml_function_coverage=1 00:29:52.166 --rc genhtml_legend=1 00:29:52.166 --rc geninfo_all_blocks=1 00:29:52.166 --rc geninfo_unexecuted_blocks=1 00:29:52.166 00:29:52.166 ' 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:52.166 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.425 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.426 03:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:57.701 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:57.701 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:57.701 Found net devices under 0000:af:00.0: cvl_0_0 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:57.701 Found net devices under 0000:af:00.1: cvl_0_1 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.701 03:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.701 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.961 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.961 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.961 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.961 03:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:29:57.961 00:29:57.961 --- 10.0.0.2 ping statistics --- 00:29:57.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.961 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:29:57.961 00:29:57.961 --- 10.0.0.1 ping statistics --- 00:29:57.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.961 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2802088 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2802088 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2802088 ']' 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.961 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:58.221 [2024-12-13 03:40:59.170201] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:29:58.221 [2024-12-13 03:40:59.170290] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.221 [2024-12-13 03:40:59.285389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.221 [2024-12-13 03:40:59.389548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.221 [2024-12-13 03:40:59.389596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.221 [2024-12-13 03:40:59.389608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.221 [2024-12-13 03:40:59.389619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.221 [2024-12-13 03:40:59.389627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.221 [2024-12-13 03:40:59.390967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.789 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.789 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:58.789 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.789 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.789 03:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2802323 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=a2cd86a9-0bcc-474c-b61d-6a930045fbb5 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a911bd62-ba92-4d9e-9fd7-7673a5be7dc9 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=99237b3c-b15b-4cac-bd3f-ff68c8e80a1d 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.049 null0 00:29:59.049 null1 00:29:59.049 null2 00:29:59.049 [2024-12-13 03:41:00.064043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.049 [2024-12-13 03:41:00.088293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.049 [2024-12-13 03:41:00.090764] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:59.049 [2024-12-13 03:41:00.090848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802323 ] 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2802323 /var/tmp/tgt2.sock 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2802323 ']' 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:59.049 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:59.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:29:59.050 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:59.050 03:41:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.050 [2024-12-13 03:41:00.203712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.308 [2024-12-13 03:41:00.316915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.246 03:41:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.246 03:41:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:00.246 03:41:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:00.504 [2024-12-13 03:41:01.455871] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.504 [2024-12-13 03:41:01.472014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:00.504 nvme0n1 nvme0n2 00:30:00.504 nvme1n1 00:30:00.505 03:41:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:00.505 03:41:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:00.505 03:41:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:01.441 03:41:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:02.818 03:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid a2cd86a9-0bcc-474c-b61d-6a930045fbb5 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:02.818 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a2cd86a90bcc474cb61d6a930045fbb5 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A2CD86A90BCC474CB61D6A930045FBB5 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ A2CD86A90BCC474CB61D6A930045FBB5 == \A\2\C\D\8\6\A\9\0\B\C\C\4\7\4\C\B\6\1\D\6\A\9\3\0\0\4\5\F\B\B\5 ]] 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a911bd62-ba92-4d9e-9fd7-7673a5be7dc9 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a911bd62ba924d9e9fd77673a5be7dc9 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A911BD62BA924D9E9FD77673A5BE7DC9 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A911BD62BA924D9E9FD77673A5BE7DC9 == \A\9\1\1\B\D\6\2\B\A\9\2\4\D\9\E\9\F\D\7\7\6\7\3\A\5\B\E\7\D\C\9 ]] 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:02.819 03:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 99237b3c-b15b-4cac-bd3f-ff68c8e80a1d 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=99237b3cb15b4cacbd3fff68c8e80a1d 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 99237B3CB15B4CACBD3FFF68C8E80A1D 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 99237B3CB15B4CACBD3FFF68C8E80A1D == \9\9\2\3\7\B\3\C\B\1\5\B\4\C\A\C\B\D\3\F\F\F\6\8\C\8\E\8\0\A\1\D ]] 00:30:02.819 03:41:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2802323 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2802323 ']' 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2802323 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802323 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802323' 00:30:03.184 killing process with pid 2802323 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2802323 00:30:03.184 03:41:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2802323 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.768 rmmod nvme_tcp 00:30:05.768 rmmod nvme_fabrics 00:30:05.768 rmmod nvme_keyring 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2802088 ']' 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2802088 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2802088 ']' 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2802088 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802088 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802088' 00:30:05.768 killing process with pid 2802088 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2802088 00:30:05.768 03:41:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2802088 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.705 03:41:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.610 03:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.610 00:30:08.610 real 0m16.589s 00:30:08.610 user 
0m16.978s 00:30:08.610 sys 0m5.588s 00:30:08.610 03:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.610 03:41:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:08.610 ************************************ 00:30:08.610 END TEST nvmf_nsid 00:30:08.610 ************************************ 00:30:08.870 03:41:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:08.870 00:30:08.870 real 18m41.285s 00:30:08.870 user 49m44.944s 00:30:08.870 sys 4m1.283s 00:30:08.870 03:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.870 03:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:08.870 ************************************ 00:30:08.870 END TEST nvmf_target_extra 00:30:08.870 ************************************ 00:30:08.870 03:41:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:08.870 03:41:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.870 03:41:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.870 03:41:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.870 ************************************ 00:30:08.870 START TEST nvmf_host 00:30:08.870 ************************************ 00:30:08.870 03:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:08.870 * Looking for test storage... 00:30:08.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:08.870 03:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:08.870 03:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:08.870 03:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.870 --rc genhtml_branch_coverage=1 00:30:08.870 --rc genhtml_function_coverage=1 00:30:08.870 --rc genhtml_legend=1 00:30:08.870 --rc geninfo_all_blocks=1 00:30:08.870 --rc geninfo_unexecuted_blocks=1 00:30:08.870 00:30:08.870 ' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.870 --rc genhtml_branch_coverage=1 00:30:08.870 --rc genhtml_function_coverage=1 00:30:08.870 --rc genhtml_legend=1 00:30:08.870 --rc geninfo_all_blocks=1 00:30:08.870 --rc geninfo_unexecuted_blocks=1 00:30:08.870 00:30:08.870 ' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.870 --rc genhtml_branch_coverage=1 00:30:08.870 --rc genhtml_function_coverage=1 00:30:08.870 --rc genhtml_legend=1 00:30:08.870 --rc geninfo_all_blocks=1 00:30:08.870 --rc geninfo_unexecuted_blocks=1 00:30:08.870 00:30:08.870 ' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.870 --rc genhtml_branch_coverage=1 00:30:08.870 --rc genhtml_function_coverage=1 00:30:08.870 --rc genhtml_legend=1 00:30:08.870 --rc geninfo_all_blocks=1 00:30:08.870 --rc geninfo_unexecuted_blocks=1 00:30:08.870 00:30:08.870 ' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.870 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.130 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.130 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.130 03:41:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:09.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.131 ************************************ 00:30:09.131 START TEST nvmf_multicontroller 00:30:09.131 ************************************ 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:09.131 * Looking for test storage... 
00:30:09.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.131 --rc genhtml_branch_coverage=1 00:30:09.131 --rc genhtml_function_coverage=1 00:30:09.131 --rc genhtml_legend=1 00:30:09.131 --rc geninfo_all_blocks=1 00:30:09.131 --rc geninfo_unexecuted_blocks=1 00:30:09.131 00:30:09.131 ' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.131 --rc genhtml_branch_coverage=1 00:30:09.131 --rc genhtml_function_coverage=1 00:30:09.131 --rc genhtml_legend=1 00:30:09.131 --rc geninfo_all_blocks=1 00:30:09.131 --rc geninfo_unexecuted_blocks=1 00:30:09.131 00:30:09.131 ' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.131 --rc genhtml_branch_coverage=1 00:30:09.131 --rc genhtml_function_coverage=1 00:30:09.131 --rc genhtml_legend=1 00:30:09.131 --rc geninfo_all_blocks=1 00:30:09.131 --rc geninfo_unexecuted_blocks=1 00:30:09.131 00:30:09.131 ' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.131 --rc genhtml_branch_coverage=1 00:30:09.131 --rc genhtml_function_coverage=1 00:30:09.131 --rc genhtml_legend=1 00:30:09.131 --rc geninfo_all_blocks=1 00:30:09.131 --rc geninfo_unexecuted_blocks=1 00:30:09.131 00:30:09.131 ' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:09.131 03:41:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.131 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:09.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.132 03:41:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.132 03:41:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.408 
03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:14.408 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:14.408 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.408 03:41:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:14.408 Found net devices under 0000:af:00.0: cvl_0_0 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:14.408 Found net devices under 0000:af:00.1: cvl_0_1 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.408 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.667 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:30:14.668 00:30:14.668 --- 10.0.0.2 ping statistics --- 00:30:14.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.668 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:30:14.668 00:30:14.668 --- 10.0.0.1 ping statistics --- 00:30:14.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.668 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2807037 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2807037 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2807037 ']' 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.668 03:41:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:14.668 [2024-12-13 03:41:15.816687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:14.668 [2024-12-13 03:41:15.816776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.927 [2024-12-13 03:41:15.932062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.927 [2024-12-13 03:41:16.037011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.927 [2024-12-13 03:41:16.037055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.927 [2024-12-13 03:41:16.037066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.927 [2024-12-13 03:41:16.037076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.927 [2024-12-13 03:41:16.037084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.927 [2024-12-13 03:41:16.039261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.927 [2024-12-13 03:41:16.039328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.927 [2024-12-13 03:41:16.039336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.495 [2024-12-13 03:41:16.651419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.495 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.754 Malloc0 00:30:15.754 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.754 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.754 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.754 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 [2024-12-13 03:41:16.771612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 [2024-12-13 03:41:16.783583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 Malloc1 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2807276 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2807276 /var/tmp/bdevperf.sock 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2807276 ']' 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
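(For orientation: every controller-attach check that follows is driven over bdevperf's private JSON-RPC socket rather than the target's default socket. A minimal sketch of that flow, assuming only the binary path, flags, and addresses already captured in this run, and using rpc_cmd, the harness's RPC helper, exactly as the traced script does; this is an illustrative reproduction, not part of the logged script itself:

    # start bdevperf idle (-z) with its own RPC socket, mirroring multicontroller.sh@43
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

    # attach the target's first listener as controller NVMe0 over the initiator address
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # confirm exactly one controller is registered under the NVMe0 name
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe

The deliberate re-attaches under the same NVMe0 name that the trace records next are expected to fail with JSON-RPC error -114, "A controller named NVMe0 already exists", which is exactly what the request/response pairs below show.)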
00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.755 03:41:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.692 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.692 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:16.692 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:16.692 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.692 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.952 NVMe0n1 00:30:16.952 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.952 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:16.952 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:16.952 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.952 03:41:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.952 1 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.952 request: 00:30:16.952 { 00:30:16.952 "name": "NVMe0", 00:30:16.952 "trtype": "tcp", 00:30:16.952 "traddr": "10.0.0.2", 00:30:16.952 "adrfam": "ipv4", 00:30:16.952 "trsvcid": "4420", 00:30:16.952 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:16.952 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:16.952 "hostaddr": "10.0.0.1", 00:30:16.952 "prchk_reftag": false, 00:30:16.952 "prchk_guard": false, 00:30:16.952 "hdgst": false, 00:30:16.952 "ddgst": false, 00:30:16.952 "allow_unrecognized_csi": false, 00:30:16.952 "method": "bdev_nvme_attach_controller", 00:30:16.952 "req_id": 1 00:30:16.952 } 00:30:16.952 Got JSON-RPC error response 00:30:16.952 response: 00:30:16.952 { 00:30:16.952 "code": -114, 00:30:16.952 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:16.952 } 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.952 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.952 request: 00:30:16.952 { 00:30:16.952 "name": "NVMe0", 00:30:16.952 "trtype": "tcp", 00:30:16.952 "traddr": "10.0.0.2", 00:30:16.952 "adrfam": "ipv4", 00:30:16.952 "trsvcid": "4420", 00:30:16.952 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:16.952 "hostaddr": "10.0.0.1", 00:30:16.952 "prchk_reftag": false, 00:30:16.952 "prchk_guard": false, 00:30:16.952 "hdgst": false, 00:30:16.952 "ddgst": false, 00:30:16.952 "allow_unrecognized_csi": false, 00:30:16.953 "method": "bdev_nvme_attach_controller", 00:30:16.953 "req_id": 1 00:30:16.953 } 00:30:16.953 Got JSON-RPC error response 00:30:16.953 response: 00:30:16.953 { 00:30:16.953 "code": -114, 00:30:16.953 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:16.953 } 00:30:16.953 03:41:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.953 request: 00:30:16.953 { 00:30:16.953 "name": "NVMe0", 00:30:16.953 "trtype": "tcp", 00:30:16.953 "traddr": "10.0.0.2", 00:30:16.953 "adrfam": "ipv4", 00:30:16.953 "trsvcid": "4420", 00:30:16.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.953 "hostaddr": "10.0.0.1", 00:30:16.953 "prchk_reftag": false, 00:30:16.953 "prchk_guard": false, 00:30:16.953 "hdgst": false, 00:30:16.953 "ddgst": false, 00:30:16.953 "multipath": "disable", 00:30:16.953 "allow_unrecognized_csi": false, 00:30:16.953 "method": "bdev_nvme_attach_controller", 00:30:16.953 "req_id": 1 00:30:16.953 } 00:30:16.953 Got JSON-RPC error response 00:30:16.953 response: 00:30:16.953 { 00:30:16.953 "code": -114, 00:30:16.953 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:16.953 } 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.953 03:41:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:16.953 request: 00:30:16.953 { 00:30:16.953 "name": "NVMe0", 00:30:16.953 "trtype": "tcp", 00:30:16.953 "traddr": "10.0.0.2", 00:30:16.953 "adrfam": "ipv4", 00:30:16.953 "trsvcid": "4420", 00:30:16.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.953 "hostaddr": "10.0.0.1", 00:30:16.953 "prchk_reftag": false, 00:30:16.953 "prchk_guard": false, 00:30:16.953 "hdgst": false, 00:30:16.953 "ddgst": false, 00:30:16.953 "multipath": "failover", 00:30:16.953 "allow_unrecognized_csi": false, 00:30:16.953 "method": "bdev_nvme_attach_controller", 00:30:16.953 "req_id": 1 00:30:16.953 } 00:30:16.953 Got JSON-RPC error response 00:30:16.953 response: 00:30:16.953 { 00:30:16.953 "code": -114, 00:30:16.953 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:16.953 } 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.953 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.213 NVMe0n1 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.213 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.472 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:17.472 03:41:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.409 { 00:30:18.409 "results": [ 00:30:18.409 { 00:30:18.409 "job": "NVMe0n1", 00:30:18.409 "core_mask": "0x1", 00:30:18.409 "workload": "write", 00:30:18.409 "status": "finished", 00:30:18.409 "queue_depth": 128, 00:30:18.409 "io_size": 4096, 00:30:18.409 "runtime": 1.008178, 00:30:18.409 "iops": 21481.32571827594, 00:30:18.409 "mibps": 83.9114285870154, 00:30:18.409 "io_failed": 0, 00:30:18.409 "io_timeout": 0, 00:30:18.409 "avg_latency_us": 5947.3851192510065, 00:30:18.409 "min_latency_us": 3651.2914285714287, 00:30:18.409 "max_latency_us": 10860.251428571428 00:30:18.409 } 00:30:18.409 ], 00:30:18.409 "core_count": 1 00:30:18.409 } 00:30:18.409 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:18.409 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.409 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2807276 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2807276 ']' 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2807276 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2807276 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2807276' 00:30:18.668 killing process with pid 2807276 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2807276 00:30:18.668 03:41:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2807276 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:19.606 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:19.606 [2024-12-13 03:41:16.969097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:19.606 [2024-12-13 03:41:16.969202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2807276 ] 00:30:19.606 [2024-12-13 03:41:17.083275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.606 [2024-12-13 03:41:17.195168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.606 [2024-12-13 03:41:18.447965] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name a9de65c9-8732-4d02-a718-7d8418962123 already exists 00:30:19.606 [2024-12-13 03:41:18.448008] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:a9de65c9-8732-4d02-a718-7d8418962123 alias for bdev NVMe1n1 00:30:19.606 [2024-12-13 03:41:18.448022] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:19.606 Running I/O for 1 seconds... 00:30:19.606 21419.00 IOPS, 83.67 MiB/s 00:30:19.606 Latency(us) 00:30:19.606 [2024-12-13T02:41:20.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.606 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:19.606 NVMe0n1 : 1.01 21481.33 83.91 0.00 0.00 5947.39 3651.29 10860.25 00:30:19.606 [2024-12-13T02:41:20.815Z] =================================================================================================================== 00:30:19.606 [2024-12-13T02:41:20.815Z] Total : 21481.33 83.91 0.00 0.00 5947.39 3651.29 10860.25 00:30:19.606 Received shutdown signal, test time was about 1.000000 seconds 00:30:19.606 00:30:19.606 Latency(us) 00:30:19.606 [2024-12-13T02:41:20.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.606 [2024-12-13T02:41:20.815Z] =================================================================================================================== 00:30:19.606 [2024-12-13T02:41:20.815Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.606 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:19.606 rmmod nvme_tcp 00:30:19.606 rmmod nvme_fabrics 00:30:19.606 rmmod nvme_keyring 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:19.606 
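(A quick consistency check on the bdevperf summary above, using only the reported figures: 21481.33 IOPS at an I/O size of 4096 bytes gives 21481.33 x 4096 / 2^20 = 83.91 MiB/s, matching the MiB/s column; and with a queue depth of 128, Little's law predicts an average latency of roughly 128 / 21481.33 s = 5959 us, close to the reported 5947.39 us.)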
03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2807037 ']' 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2807037 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2807037 ']' 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2807037 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2807037 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2807037' 00:30:19.606 killing process with pid 2807037 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2807037 00:30:19.606 03:41:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2807037 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.511 03:41:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.418 00:30:23.418 real 0m14.210s 00:30:23.418 user 0m23.656s 00:30:23.418 sys 0m5.034s 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.418 ************************************ 00:30:23.418 END TEST nvmf_multicontroller 00:30:23.418 ************************************ 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.418 ************************************ 00:30:23.418 START TEST nvmf_aer 00:30:23.418 ************************************ 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:23.418 * Looking for test storage... 00:30:23.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.418 --rc genhtml_branch_coverage=1 00:30:23.418 --rc genhtml_function_coverage=1 00:30:23.418 --rc genhtml_legend=1 00:30:23.418 --rc geninfo_all_blocks=1 00:30:23.418 --rc geninfo_unexecuted_blocks=1 00:30:23.418 00:30:23.418 ' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.418 --rc genhtml_branch_coverage=1 00:30:23.418 --rc genhtml_function_coverage=1 00:30:23.418 --rc genhtml_legend=1 00:30:23.418 --rc geninfo_all_blocks=1 00:30:23.418 --rc geninfo_unexecuted_blocks=1 00:30:23.418 00:30:23.418 ' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.418 --rc genhtml_branch_coverage=1 00:30:23.418 --rc genhtml_function_coverage=1 00:30:23.418 --rc genhtml_legend=1 00:30:23.418 --rc geninfo_all_blocks=1 00:30:23.418 --rc geninfo_unexecuted_blocks=1 00:30:23.418 00:30:23.418 ' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:23.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.418 --rc genhtml_branch_coverage=1 00:30:23.418 --rc genhtml_function_coverage=1 00:30:23.418 --rc genhtml_legend=1 00:30:23.418 --rc geninfo_all_blocks=1 00:30:23.418 --rc geninfo_unexecuted_blocks=1 00:30:23.418 00:30:23.418 ' 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.418 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:23.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.419 03:41:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:28.695 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:28.695 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:28.695 Found net devices under 0000:af:00.0: cvl_0_0 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.695 03:41:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:28.695 Found net devices under 0000:af:00.1: cvl_0_1 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.695 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.696 
03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:30:28.696 00:30:28.696 --- 10.0.0.2 ping statistics --- 00:30:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.696 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:30:28.696 00:30:28.696 --- 10.0.0.1 ping statistics --- 00:30:28.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.696 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2811431 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2811431 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2811431 ']' 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:28.696 03:41:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:28.696 [2024-12-13 03:41:29.862755] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:28.696 [2024-12-13 03:41:29.862843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.955 [2024-12-13 03:41:29.980236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.955 [2024-12-13 03:41:30.098354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.955 [2024-12-13 03:41:30.098400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.955 [2024-12-13 03:41:30.098411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.955 [2024-12-13 03:41:30.098421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.955 [2024-12-13 03:41:30.098438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.955 [2024-12-13 03:41:30.100758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.955 [2024-12-13 03:41:30.100832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.955 [2024-12-13 03:41:30.100895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.955 [2024-12-13 03:41:30.100905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.524 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.524 [2024-12-13 03:41:30.722312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.784 Malloc0 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.784 [2024-12-13 03:41:30.845860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:29.784 [ 00:30:29.784 { 00:30:29.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:29.784 "subtype": "Discovery", 00:30:29.784 "listen_addresses": [], 00:30:29.784 "allow_any_host": true, 00:30:29.784 "hosts": [] 00:30:29.784 }, 00:30:29.784 { 00:30:29.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:29.784 "subtype": "NVMe", 00:30:29.784 "listen_addresses": [ 00:30:29.784 { 00:30:29.784 "trtype": "TCP", 00:30:29.784 "adrfam": "IPv4", 00:30:29.784 "traddr": "10.0.0.2", 00:30:29.784 "trsvcid": "4420" 00:30:29.784 } 00:30:29.784 ], 00:30:29.784 "allow_any_host": true, 00:30:29.784 "hosts": [], 00:30:29.784 "serial_number": "SPDK00000000000001", 00:30:29.784 "model_number": "SPDK bdev Controller", 00:30:29.784 "max_namespaces": 2, 00:30:29.784 "min_cntlid": 1, 00:30:29.784 "max_cntlid": 65519, 00:30:29.784 "namespaces": [ 00:30:29.784 { 00:30:29.784 "nsid": 1, 00:30:29.784 "bdev_name": "Malloc0", 00:30:29.784 "name": "Malloc0", 00:30:29.784 "nguid": "BD6E4DB535444F26B405E21F0F298FD9", 00:30:29.784 "uuid": "bd6e4db5-3544-4f26-b405-e21f0f298fd9" 00:30:29.784 } 00:30:29.784 ] 00:30:29.784 } 00:30:29.784 ] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2811673 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:29.784 03:41:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:30:30.043 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.302 Malloc1 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.302 [ 00:30:30.302 { 00:30:30.302 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:30.302 "subtype": "Discovery", 00:30:30.302 "listen_addresses": [], 00:30:30.302 "allow_any_host": true, 00:30:30.302 "hosts": [] 00:30:30.302 }, 00:30:30.302 { 00:30:30.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:30.302 "subtype": "NVMe", 00:30:30.302 "listen_addresses": [ 00:30:30.302 { 00:30:30.302 "trtype": "TCP", 00:30:30.302 "adrfam": "IPv4", 00:30:30.302 "traddr": "10.0.0.2", 00:30:30.302 "trsvcid": "4420" 00:30:30.302 } 00:30:30.302 ], 00:30:30.302 "allow_any_host": true, 00:30:30.302 "hosts": [], 00:30:30.302 "serial_number": "SPDK00000000000001", 00:30:30.302 "model_number": "SPDK bdev Controller", 00:30:30.302 "max_namespaces": 2, 00:30:30.302 "min_cntlid": 1, 00:30:30.302 "max_cntlid": 65519, 00:30:30.302 "namespaces": [ 00:30:30.302 { 00:30:30.302 "nsid": 1, 00:30:30.302 "bdev_name": "Malloc0", 00:30:30.302 "name": "Malloc0", 00:30:30.302 "nguid": "BD6E4DB535444F26B405E21F0F298FD9", 00:30:30.302 "uuid": "bd6e4db5-3544-4f26-b405-e21f0f298fd9" 00:30:30.302 }, 00:30:30.302 { 00:30:30.302 "nsid": 2, 00:30:30.302 "bdev_name": "Malloc1", 00:30:30.302 "name": "Malloc1", 00:30:30.302 "nguid": "1DF50917A9C747858A44405019A59C59", 00:30:30.302 "uuid": "1df50917-a9c7-4785-8a44-405019a59c59" 00:30:30.302 } 00:30:30.302 ] 00:30:30.302 } 00:30:30.302 ] 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2811673 00:30:30.302 Asynchronous Event Request test 00:30:30.302 Attaching to 10.0.0.2 00:30:30.302 Attached to 10.0.0.2 00:30:30.302 Registering asynchronous event callbacks... 00:30:30.302 Starting namespace attribute notice tests for all controllers... 00:30:30.302 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:30.302 aer_cb - Changed Namespace 00:30:30.302 Cleaning up... 
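(The "Changed Namespace" notice above is what the aer application waits for: it connects to nqn.2016-06.io.spdk:cnode1, registers for asynchronous events, and the namespace-attribute AEN (log page 4, event type 0x02) is provoked by hot-adding a second namespace. A minimal sketch of that trigger, assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock socket; bdev and subsystem names are taken verbatim from the trace.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Create the Malloc1 bdev (size 64, block size 4096, as in host/aer.sh@39).
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  # Hot-add it as namespace 2 of the subsystem; this generates the namespace
  # attribute changed AEN that the waiting aer application reports above.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2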
00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.302 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.561 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.561 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:30.561 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.561 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:30.820 rmmod nvme_tcp 00:30:30.820 rmmod nvme_fabrics 00:30:30.820 rmmod nvme_keyring 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2811431 ']' 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2811431 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2811431 ']' 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2811431 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.820 03:41:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2811431 00:30:30.820 03:41:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.820 03:41:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.820 03:41:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2811431' 00:30:30.820 killing process with pid 2811431 00:30:30.820 03:41:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # 
kill 2811431 00:30:30.820 03:41:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2811431 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.198 03:41:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:34.110 00:30:34.110 real 0m10.843s 00:30:34.110 user 0m12.753s 00:30:34.110 sys 0m4.552s 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:34.110 ************************************ 00:30:34.110 END TEST nvmf_aer 00:30:34.110 ************************************ 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.110 03:41:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.368 ************************************ 00:30:34.368 START TEST nvmf_async_init 00:30:34.368 ************************************ 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:34.368 * Looking for test storage... 
00:30:34.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.368 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:34.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.369 --rc genhtml_branch_coverage=1 00:30:34.369 --rc genhtml_function_coverage=1 00:30:34.369 --rc genhtml_legend=1 00:30:34.369 --rc geninfo_all_blocks=1 00:30:34.369 --rc geninfo_unexecuted_blocks=1 00:30:34.369 00:30:34.369 ' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:34.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.369 --rc genhtml_branch_coverage=1 00:30:34.369 --rc genhtml_function_coverage=1 00:30:34.369 --rc genhtml_legend=1 00:30:34.369 --rc geninfo_all_blocks=1 00:30:34.369 --rc geninfo_unexecuted_blocks=1 00:30:34.369 00:30:34.369 ' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:34.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.369 --rc genhtml_branch_coverage=1 00:30:34.369 --rc genhtml_function_coverage=1 00:30:34.369 --rc genhtml_legend=1 00:30:34.369 --rc geninfo_all_blocks=1 00:30:34.369 --rc geninfo_unexecuted_blocks=1 00:30:34.369 00:30:34.369 ' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:34.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.369 --rc genhtml_branch_coverage=1 00:30:34.369 --rc genhtml_function_coverage=1 00:30:34.369 --rc genhtml_legend=1 00:30:34.369 --rc geninfo_all_blocks=1 00:30:34.369 --rc geninfo_unexecuted_blocks=1 00:30:34.369 00:30:34.369 ' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.369 03:41:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:34.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:34.369 03:41:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a90357117b4e469abb3aefb77bfe0b71 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:34.369 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:34.370 03:41:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:39.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:39.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:39.641 Found net devices under 0000:af:00.0: cvl_0_0 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:39.641 Found net devices under 0000:af:00.1: cvl_0_1 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.641 03:41:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.641 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:30:39.642 00:30:39.642 --- 10.0.0.2 ping statistics --- 00:30:39.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.642 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:39.642 00:30:39.642 --- 10.0.0.1 ping statistics --- 00:30:39.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.642 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2815360 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2815360 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2815360 ']' 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.642 03:41:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:39.901 [2024-12-13 03:41:40.855954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
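For reference, the target-side plumbing traced above can be reproduced by hand. This is a minimal sketch assembled from the commands visible in this log; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the ones used in this run, and the target is launched inside the namespace exactly as nvmfappstart does here.

# Move the target-side E810 port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic from the initiator-side interface (tagged so teardown can find the rule)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &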
00:30:39.901 [2024-12-13 03:41:40.856055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.901 [2024-12-13 03:41:40.974802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.901 [2024-12-13 03:41:41.083815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.901 [2024-12-13 03:41:41.083857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.901 [2024-12-13 03:41:41.083868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.901 [2024-12-13 03:41:41.083879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.901 [2024-12-13 03:41:41.083887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.901 [2024-12-13 03:41:41.085274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.470 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.470 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:40.470 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:40.470 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.470 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.729 [2024-12-13 03:41:41.689404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.729 null0 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a90357117b4e469abb3aefb77bfe0b71 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.729 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.730 [2024-12-13 03:41:41.729664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.730 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 nvme0n1 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 [ 00:30:40.989 { 00:30:40.989 "name": "nvme0n1", 00:30:40.989 "aliases": [ 00:30:40.989 "a9035711-7b4e-469a-bb3a-efb77bfe0b71" 00:30:40.989 ], 00:30:40.989 "product_name": "NVMe disk", 00:30:40.989 "block_size": 512, 00:30:40.989 "num_blocks": 2097152, 00:30:40.989 "uuid": "a9035711-7b4e-469a-bb3a-efb77bfe0b71", 00:30:40.989 "numa_id": 1, 00:30:40.989 "assigned_rate_limits": { 00:30:40.989 "rw_ios_per_sec": 0, 00:30:40.989 "rw_mbytes_per_sec": 0, 00:30:40.989 "r_mbytes_per_sec": 0, 00:30:40.989 "w_mbytes_per_sec": 0 00:30:40.989 }, 00:30:40.989 "claimed": false, 00:30:40.989 "zoned": false, 00:30:40.989 "supported_io_types": { 00:30:40.989 "read": true, 00:30:40.989 "write": true, 00:30:40.989 "unmap": false, 00:30:40.989 "flush": true, 00:30:40.989 "reset": true, 00:30:40.989 "nvme_admin": true, 00:30:40.989 "nvme_io": true, 00:30:40.989 "nvme_io_md": false, 00:30:40.989 "write_zeroes": true, 00:30:40.989 "zcopy": false, 00:30:40.989 "get_zone_info": false, 00:30:40.989 "zone_management": false, 00:30:40.989 "zone_append": false, 00:30:40.989 "compare": true, 00:30:40.989 "compare_and_write": true, 00:30:40.989 "abort": true, 00:30:40.989 "seek_hole": false, 00:30:40.989 "seek_data": false, 00:30:40.989 "copy": true, 00:30:40.989 "nvme_iov_md": false 00:30:40.989 }, 00:30:40.989 
"memory_domains": [ 00:30:40.989 { 00:30:40.989 "dma_device_id": "system", 00:30:40.989 "dma_device_type": 1 00:30:40.989 } 00:30:40.989 ], 00:30:40.989 "driver_specific": { 00:30:40.989 "nvme": [ 00:30:40.989 { 00:30:40.989 "trid": { 00:30:40.989 "trtype": "TCP", 00:30:40.989 "adrfam": "IPv4", 00:30:40.989 "traddr": "10.0.0.2", 00:30:40.989 "trsvcid": "4420", 00:30:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:40.989 }, 00:30:40.989 "ctrlr_data": { 00:30:40.989 "cntlid": 1, 00:30:40.989 "vendor_id": "0x8086", 00:30:40.989 "model_number": "SPDK bdev Controller", 00:30:40.989 "serial_number": "00000000000000000000", 00:30:40.989 "firmware_revision": "25.01", 00:30:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.989 "oacs": { 00:30:40.989 "security": 0, 00:30:40.989 "format": 0, 00:30:40.989 "firmware": 0, 00:30:40.989 "ns_manage": 0 00:30:40.989 }, 00:30:40.989 "multi_ctrlr": true, 00:30:40.989 "ana_reporting": false 00:30:40.989 }, 00:30:40.989 "vs": { 00:30:40.989 "nvme_version": "1.3" 00:30:40.989 }, 00:30:40.989 "ns_data": { 00:30:40.989 "id": 1, 00:30:40.989 "can_share": true 00:30:40.989 } 00:30:40.989 } 00:30:40.989 ], 00:30:40.989 "mp_policy": "active_passive" 00:30:40.989 } 00:30:40.989 } 00:30:40.989 ] 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.989 03:41:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 [2024-12-13 03:41:41.980267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:40.989 [2024-12-13 03:41:41.980347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:30:40.989 [2024-12-13 03:41:42.112049] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:40.989 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.989 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:40.989 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.989 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.989 [ 00:30:40.989 { 00:30:40.989 "name": "nvme0n1", 00:30:40.989 "aliases": [ 00:30:40.989 "a9035711-7b4e-469a-bb3a-efb77bfe0b71" 00:30:40.989 ], 00:30:40.989 "product_name": "NVMe disk", 00:30:40.989 "block_size": 512, 00:30:40.989 "num_blocks": 2097152, 00:30:40.989 "uuid": "a9035711-7b4e-469a-bb3a-efb77bfe0b71", 00:30:40.989 "numa_id": 1, 00:30:40.989 "assigned_rate_limits": { 00:30:40.989 "rw_ios_per_sec": 0, 00:30:40.989 "rw_mbytes_per_sec": 0, 00:30:40.989 "r_mbytes_per_sec": 0, 00:30:40.989 "w_mbytes_per_sec": 0 00:30:40.989 }, 00:30:40.989 "claimed": false, 00:30:40.989 "zoned": false, 00:30:40.989 "supported_io_types": { 00:30:40.989 "read": true, 00:30:40.989 "write": true, 00:30:40.989 "unmap": false, 00:30:40.989 "flush": true, 00:30:40.989 "reset": true, 00:30:40.989 "nvme_admin": true, 00:30:40.989 "nvme_io": true, 00:30:40.989 "nvme_io_md": false, 00:30:40.990 "write_zeroes": true, 00:30:40.990 "zcopy": false, 00:30:40.990 "get_zone_info": false, 00:30:40.990 "zone_management": false, 00:30:40.990 "zone_append": false, 00:30:40.990 "compare": true, 00:30:40.990 "compare_and_write": true, 00:30:40.990 "abort": true, 00:30:40.990 "seek_hole": false, 00:30:40.990 "seek_data": false, 00:30:40.990 "copy": true, 00:30:40.990 "nvme_iov_md": false 00:30:40.990 }, 00:30:40.990 "memory_domains": [ 00:30:40.990 { 00:30:40.990 "dma_device_id": "system", 00:30:40.990 "dma_device_type": 1 00:30:40.990 } 00:30:40.990 ], 00:30:40.990 "driver_specific": { 00:30:40.990 "nvme": [ 00:30:40.990 { 00:30:40.990 "trid": { 00:30:40.990 "trtype": "TCP", 00:30:40.990 "adrfam": "IPv4", 00:30:40.990 "traddr": "10.0.0.2", 00:30:40.990 "trsvcid": "4420", 00:30:40.990 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:40.990 }, 00:30:40.990 "ctrlr_data": { 00:30:40.990 "cntlid": 2, 00:30:40.990 "vendor_id": "0x8086", 00:30:40.990 "model_number": "SPDK bdev Controller", 00:30:40.990 "serial_number": "00000000000000000000", 00:30:40.990 "firmware_revision": "25.01", 00:30:40.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.990 "oacs": { 00:30:40.990 "security": 0, 00:30:40.990 "format": 0, 00:30:40.990 "firmware": 0, 00:30:40.990 "ns_manage": 0 00:30:40.990 }, 00:30:40.990 "multi_ctrlr": true, 00:30:40.990 "ana_reporting": false 00:30:40.990 }, 00:30:40.990 "vs": { 00:30:40.990 "nvme_version": "1.3" 00:30:40.990 }, 00:30:40.990 "ns_data": { 00:30:40.990 "id": 1, 00:30:40.990 "can_share": true 00:30:40.990 } 00:30:40.990 } 00:30:40.990 ], 00:30:40.990 "mp_policy": "active_passive" 00:30:40.990 } 00:30:40.990 } 00:30:40.990 ] 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
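The provisioning and attach sequence exercised above maps directly onto SPDK's rpc.py; the sketch below replays the same calls (the test issues them through its rpc_cmd wrapper against the default /var/tmp/spdk.sock, and the nguid is the value generated earlier in this run). The second bdev_get_bdevs dump confirms the bdev survives the controller reset: the TRID is unchanged and only cntlid advances from 1 to 2.

RPC=./scripts/rpc.py

# Target side: transport, backing null bdev, subsystem, namespace, listener
$RPC nvmf_create_transport -t tcp -o
$RPC bdev_null_create null0 1024 512
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a90357117b4e469abb3aefb77bfe0b71
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach over TCP, inspect the resulting bdev, reset, detach
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
$RPC bdev_get_bdevs -b nvme0n1
$RPC bdev_nvme_reset_controller nvme0
$RPC bdev_get_bdevs -b nvme0n1          # same bdev, cntlid is now 2
$RPC bdev_nvme_detach_controller nvme0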
00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NaOxHRq7YW 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NaOxHRq7YW 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.NaOxHRq7YW 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.990 [2024-12-13 03:41:42.168879] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:40.990 [2024-12-13 03:41:42.169062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.990 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:40.990 [2024-12-13 03:41:42.184932] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:41.249 nvme0n1 00:30:41.249 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.249 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:41.249 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.249 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:41.249 [ 00:30:41.249 { 00:30:41.249 "name": "nvme0n1", 00:30:41.249 "aliases": [ 00:30:41.249 "a9035711-7b4e-469a-bb3a-efb77bfe0b71" 00:30:41.249 ], 00:30:41.249 "product_name": "NVMe disk", 00:30:41.249 "block_size": 512, 00:30:41.249 "num_blocks": 2097152, 00:30:41.249 "uuid": "a9035711-7b4e-469a-bb3a-efb77bfe0b71", 00:30:41.249 "numa_id": 1, 00:30:41.249 "assigned_rate_limits": { 00:30:41.249 "rw_ios_per_sec": 0, 00:30:41.249 "rw_mbytes_per_sec": 0, 00:30:41.249 "r_mbytes_per_sec": 0, 00:30:41.249 "w_mbytes_per_sec": 0 00:30:41.249 }, 00:30:41.249 "claimed": false, 00:30:41.249 "zoned": false, 00:30:41.249 "supported_io_types": { 00:30:41.249 "read": true, 00:30:41.249 "write": true, 00:30:41.249 "unmap": false, 00:30:41.249 "flush": true, 00:30:41.249 "reset": true, 00:30:41.249 "nvme_admin": true, 00:30:41.249 "nvme_io": true, 00:30:41.249 "nvme_io_md": false, 00:30:41.249 "write_zeroes": true, 00:30:41.249 "zcopy": false, 00:30:41.249 "get_zone_info": false, 00:30:41.249 "zone_management": false, 00:30:41.249 "zone_append": false, 00:30:41.249 "compare": true, 00:30:41.249 "compare_and_write": true, 00:30:41.249 "abort": true, 00:30:41.249 "seek_hole": false, 00:30:41.249 "seek_data": false, 00:30:41.249 "copy": true, 00:30:41.249 "nvme_iov_md": false 00:30:41.249 }, 00:30:41.249 "memory_domains": [ 00:30:41.249 { 00:30:41.249 "dma_device_id": "system", 00:30:41.249 "dma_device_type": 1 00:30:41.249 } 00:30:41.249 ], 00:30:41.249 "driver_specific": { 00:30:41.249 "nvme": [ 00:30:41.249 { 00:30:41.249 "trid": { 00:30:41.249 "trtype": "TCP", 00:30:41.249 "adrfam": "IPv4", 00:30:41.249 "traddr": "10.0.0.2", 00:30:41.249 "trsvcid": "4421", 00:30:41.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:41.249 }, 00:30:41.249 "ctrlr_data": { 00:30:41.249 "cntlid": 3, 00:30:41.249 "vendor_id": "0x8086", 00:30:41.249 "model_number": "SPDK bdev Controller", 00:30:41.249 "serial_number": "00000000000000000000", 00:30:41.249 "firmware_revision": "25.01", 00:30:41.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.249 "oacs": { 00:30:41.249 "security": 0, 00:30:41.249 "format": 0, 00:30:41.249 "firmware": 0, 00:30:41.249 "ns_manage": 0 00:30:41.249 }, 00:30:41.249 "multi_ctrlr": true, 00:30:41.249 "ana_reporting": false 00:30:41.249 }, 00:30:41.249 "vs": { 00:30:41.249 "nvme_version": "1.3" 00:30:41.249 }, 00:30:41.249 "ns_data": { 00:30:41.249 "id": 1, 00:30:41.249 "can_share": true 00:30:41.249 } 00:30:41.250 } 00:30:41.250 ], 00:30:41.250 "mp_policy": "active_passive" 00:30:41.250 } 00:30:41.250 } 00:30:41.250 ] 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.NaOxHRq7YW 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
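The TLS leg above follows the same pattern with a pre-shared key; the key material, path, and RPC arguments in this sketch are the ones printed in this run (both the --secure-channel listener and the --psk attach log that TLS support is considered experimental).

RPC=./scripts/rpc.py

# Stage the interchange-format PSK with restrictive permissions
KEY_PATH=$(mktemp)                       # /tmp/tmp.NaOxHRq7YW in this run
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# Register the key and require an explicit, PSK-authenticated host
$RPC keyring_file_add_key key0 "$KEY_PATH"
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

# Host side: attach on the TLS port, presenting the same key
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0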
00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.250 rmmod nvme_tcp 00:30:41.250 rmmod nvme_fabrics 00:30:41.250 rmmod nvme_keyring 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2815360 ']' 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2815360 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2815360 ']' 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2815360 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815360 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815360' 00:30:41.250 killing process with pid 2815360 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2815360 00:30:41.250 03:41:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2815360 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
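Teardown (nvmftestfini, completing just below) undoes all of the above. A hedged sketch follows; the body of _remove_spdk_ns is not shown in this log, so the netns deletion line is an assumption about what that helper amounts to here.

rm -f /tmp/tmp.NaOxHRq7YW                       # drop the staged PSK
kill "$nvmfpid" && wait "$nvmfpid"              # stop nvmf_tgt (pid 2815360 in this run)
modprobe -v -r nvme-tcp                         # also pulls nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics

# Strip only the SPDK-tagged firewall rules, leaving everything else intact
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                 # assumed equivalent of _remove_spdk_ns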
00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.629 03:41:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:44.542 00:30:44.542 real 0m10.268s 00:30:44.542 user 0m4.331s 00:30:44.542 sys 0m4.361s 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:44.542 ************************************ 00:30:44.542 END TEST nvmf_async_init 00:30:44.542 ************************************ 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.542 ************************************ 00:30:44.542 START TEST dma 00:30:44.542 ************************************ 00:30:44.542 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:44.542 * Looking for test storage... 00:30:44.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:44.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.802 --rc genhtml_branch_coverage=1 00:30:44.802 --rc genhtml_function_coverage=1 00:30:44.802 --rc genhtml_legend=1 00:30:44.802 --rc geninfo_all_blocks=1 00:30:44.802 --rc geninfo_unexecuted_blocks=1 00:30:44.802 00:30:44.802 ' 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:44.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.802 --rc genhtml_branch_coverage=1 00:30:44.802 --rc genhtml_function_coverage=1 00:30:44.802 --rc genhtml_legend=1 00:30:44.802 --rc geninfo_all_blocks=1 00:30:44.802 --rc geninfo_unexecuted_blocks=1 00:30:44.802 00:30:44.802 ' 00:30:44.802 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:44.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.802 --rc genhtml_branch_coverage=1 00:30:44.802 --rc genhtml_function_coverage=1 00:30:44.802 --rc genhtml_legend=1 00:30:44.803 --rc geninfo_all_blocks=1 00:30:44.803 --rc geninfo_unexecuted_blocks=1 00:30:44.803 00:30:44.803 ' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:44.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.803 --rc genhtml_branch_coverage=1 00:30:44.803 --rc genhtml_function_coverage=1 00:30:44.803 --rc genhtml_legend=1 00:30:44.803 --rc geninfo_all_blocks=1 00:30:44.803 --rc geninfo_unexecuted_blocks=1 00:30:44.803 00:30:44.803 ' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.803 
03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:44.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:44.803 00:30:44.803 real 0m0.206s 00:30:44.803 user 0m0.131s 00:30:44.803 sys 0m0.088s 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:44.803 ************************************ 00:30:44.803 END TEST dma 00:30:44.803 ************************************ 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.803 ************************************ 00:30:44.803 START TEST nvmf_identify 00:30:44.803 
************************************ 00:30:44.803 03:41:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:45.063 * Looking for test storage... 00:30:45.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.063 --rc genhtml_branch_coverage=1 00:30:45.063 --rc genhtml_function_coverage=1 00:30:45.063 --rc genhtml_legend=1 00:30:45.063 --rc geninfo_all_blocks=1 00:30:45.063 --rc geninfo_unexecuted_blocks=1 00:30:45.063 00:30:45.063 ' 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.063 --rc genhtml_branch_coverage=1 00:30:45.063 --rc genhtml_function_coverage=1 00:30:45.063 --rc genhtml_legend=1 00:30:45.063 --rc geninfo_all_blocks=1 00:30:45.063 --rc geninfo_unexecuted_blocks=1 00:30:45.063 00:30:45.063 ' 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.063 --rc genhtml_branch_coverage=1 00:30:45.063 --rc genhtml_function_coverage=1 00:30:45.063 --rc genhtml_legend=1 00:30:45.063 --rc geninfo_all_blocks=1 00:30:45.063 --rc geninfo_unexecuted_blocks=1 00:30:45.063 00:30:45.063 ' 00:30:45.063 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.063 --rc genhtml_branch_coverage=1 00:30:45.063 --rc genhtml_function_coverage=1 00:30:45.063 --rc genhtml_legend=1 00:30:45.063 --rc geninfo_all_blocks=1 00:30:45.063 --rc geninfo_unexecuted_blocks=1 00:30:45.063 00:30:45.063 ' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:45.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.064 03:41:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.337 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:50.338 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:50.338 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
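[Note] The trace above is SPDK's NIC discovery step: it walks the cached PCI device list, matches the Intel E810 device ID (0x8086 / 0x159b) against the known e810/x722/mlx tables, and then resolves each matching PCI function to its kernel net device through sysfs. A minimal sketch of that same lookup follows, assuming only the standard sysfs layout; the PCI addresses and IDs are the ones printed in the log, not part of the test scripts themselves.

  # Sketch of the lookup performed above (assumes the two E810 ports from the log).
  for pci in 0000:af:00.0 0000:af:00.1; do
    vendor=$(cat /sys/bus/pci/devices/$pci/vendor)   # expect 0x8086 per the trace
    device=$(cat /sys/bus/pci/devices/$pci/device)   # expect 0x159b (ice / E810)
    echo "Found $pci ($vendor - $device)"
    for net in /sys/bus/pci/devices/$pci/net/*; do   # kernel net dev bound to this port
      echo "Found net devices under $pci: ${net##*/}"
    done
  done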
00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:50.338 Found net devices under 0000:af:00.0: cvl_0_0 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:50.338 Found net devices under 0000:af:00.1: cvl_0_1 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:30:50.338 00:30:50.338 --- 10.0.0.2 ping statistics --- 00:30:50.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.338 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:30:50.338 00:30:50.338 --- 10.0.0.1 ping statistics --- 00:30:50.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.338 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.338 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2819278 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2819278 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2819278 ']' 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.597 03:41:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:50.597 [2024-12-13 03:41:51.640777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
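[Note] The nvmf_tcp_init portion of the trace above splits the two E810 ports between a private network namespace (target side, 10.0.0.2) and the default namespace (initiator side, 10.0.0.1), opens TCP port 4420 through iptables, verifies reachability with ping in both directions, and only then loads nvme-tcp and launches nvmf_tgt inside the namespace. A condensed sketch of that setup, using the interface names and addresses taken from the log (adjust for other NICs), is:

  # Condensed sketch of the namespace split shown in the trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
  modprobe nvme-tcp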
00:30:50.597 [2024-12-13 03:41:51.640890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.597 [2024-12-13 03:41:51.757457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.855 [2024-12-13 03:41:51.863058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.855 [2024-12-13 03:41:51.863103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.855 [2024-12-13 03:41:51.863113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.855 [2024-12-13 03:41:51.863123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.855 [2024-12-13 03:41:51.863131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.855 [2024-12-13 03:41:51.865604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.855 [2024-12-13 03:41:51.865677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.855 [2024-12-13 03:41:51.865742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.855 [2024-12-13 03:41:51.865751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 [2024-12-13 03:41:52.460125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 Malloc0 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.422 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.423 [2024-12-13 03:41:52.614787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.423 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.684 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.684 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:51.684 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.684 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:51.684 [ 00:30:51.684 { 00:30:51.684 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:51.684 "subtype": "Discovery", 00:30:51.684 "listen_addresses": [ 00:30:51.684 { 00:30:51.684 "trtype": "TCP", 00:30:51.684 "adrfam": "IPv4", 00:30:51.684 "traddr": "10.0.0.2", 00:30:51.684 "trsvcid": "4420" 00:30:51.684 } 00:30:51.684 ], 00:30:51.684 "allow_any_host": true, 00:30:51.684 "hosts": [] 00:30:51.684 }, 00:30:51.684 { 00:30:51.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.684 "subtype": "NVMe", 00:30:51.684 "listen_addresses": [ 00:30:51.684 { 00:30:51.684 "trtype": "TCP", 00:30:51.684 "adrfam": "IPv4", 00:30:51.684 "traddr": "10.0.0.2", 00:30:51.684 "trsvcid": "4420" 00:30:51.684 } 00:30:51.684 ], 00:30:51.684 "allow_any_host": true, 00:30:51.684 "hosts": [], 00:30:51.684 "serial_number": "SPDK00000000000001", 00:30:51.684 "model_number": "SPDK bdev Controller", 00:30:51.684 "max_namespaces": 32, 00:30:51.684 "min_cntlid": 1, 00:30:51.684 "max_cntlid": 65519, 00:30:51.684 "namespaces": [ 00:30:51.684 { 00:30:51.684 "nsid": 1, 00:30:51.684 "bdev_name": "Malloc0", 00:30:51.684 "name": "Malloc0", 00:30:51.684 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:51.684 "eui64": "ABCDEF0123456789", 00:30:51.684 "uuid": "06a7d9a1-e90c-4e4e-8dee-34c84f86347b" 00:30:51.684 } 00:30:51.684 ] 00:30:51.684 } 00:30:51.684 ] 00:30:51.684 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.684 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:51.684 [2024-12-13 03:41:52.688250] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:51.684 [2024-12-13 03:41:52.688316] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819373 ] 00:30:51.684 [2024-12-13 03:41:52.752577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:51.684 [2024-12-13 03:41:52.752673] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:51.684 [2024-12-13 03:41:52.752683] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:51.684 [2024-12-13 03:41:52.752705] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:51.684 [2024-12-13 03:41:52.752718] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:51.684 [2024-12-13 03:41:52.753270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:51.684 [2024-12-13 03:41:52.753314] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:30:51.684 [2024-12-13 03:41:52.759931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:51.684 [2024-12-13 03:41:52.759961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:51.684 [2024-12-13 03:41:52.759972] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:51.684 [2024-12-13 03:41:52.759979] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:51.684 [2024-12-13 03:41:52.760032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.760042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.760050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.684 [2024-12-13 03:41:52.760071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:51.684 [2024-12-13 03:41:52.760097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.684 [2024-12-13 03:41:52.767934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.684 [2024-12-13 03:41:52.767956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.684 [2024-12-13 03:41:52.767963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.767971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.684 [2024-12-13 03:41:52.767987] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:51.684 [2024-12-13 03:41:52.768005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:51.684 [2024-12-13 03:41:52.768016] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:51.684 [2024-12-13 
03:41:52.768033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.684 [2024-12-13 03:41:52.768062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.684 [2024-12-13 03:41:52.768083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.684 [2024-12-13 03:41:52.768210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.684 [2024-12-13 03:41:52.768220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.684 [2024-12-13 03:41:52.768225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768236] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.684 [2024-12-13 03:41:52.768249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:51.684 [2024-12-13 03:41:52.768262] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:51.684 [2024-12-13 03:41:52.768273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.684 [2024-12-13 03:41:52.768298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.684 [2024-12-13 03:41:52.768317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.684 [2024-12-13 03:41:52.768391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.684 [2024-12-13 03:41:52.768404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.684 [2024-12-13 03:41:52.768409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.684 [2024-12-13 03:41:52.768422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:51.684 [2024-12-13 03:41:52.768434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:51.684 [2024-12-13 03:41:52.768446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.684 [2024-12-13 03:41:52.768469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.684 [2024-12-13 03:41:52.768488] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.684 [2024-12-13 03:41:52.768569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.684 [2024-12-13 03:41:52.768577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.684 [2024-12-13 03:41:52.768583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.684 [2024-12-13 03:41:52.768596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:51.684 [2024-12-13 03:41:52.768610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.684 [2024-12-13 03:41:52.768632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.684 [2024-12-13 03:41:52.768653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.684 [2024-12-13 03:41:52.768727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.684 [2024-12-13 03:41:52.768736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.684 [2024-12-13 03:41:52.768741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.684 [2024-12-13 03:41:52.768746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.684 [2024-12-13 03:41:52.768753] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:51.684 [2024-12-13 03:41:52.768761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:51.684 [2024-12-13 03:41:52.768771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:51.684 [2024-12-13 03:41:52.768879] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:51.685 [2024-12-13 03:41:52.768886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:51.685 [2024-12-13 03:41:52.768906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.768913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.768925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.768936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.685 [2024-12-13 03:41:52.768953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.685 [2024-12-13 03:41:52.769035] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.685 [2024-12-13 03:41:52.769044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.685 [2024-12-13 03:41:52.769049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.685 [2024-12-13 03:41:52.769062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:51.685 [2024-12-13 03:41:52.769075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.769097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.685 [2024-12-13 03:41:52.769111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.685 [2024-12-13 03:41:52.769197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.685 [2024-12-13 03:41:52.769206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.685 [2024-12-13 03:41:52.769211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.685 [2024-12-13 03:41:52.769225] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:51.685 [2024-12-13 03:41:52.769233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:51.685 [2024-12-13 03:41:52.769244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:51.685 [2024-12-13 03:41:52.769262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:51.685 [2024-12-13 03:41:52.769279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.769297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.685 [2024-12-13 03:41:52.769311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.685 [2024-12-13 03:41:52.769425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.685 [2024-12-13 03:41:52.769434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.685 [2024-12-13 03:41:52.769439] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769446] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:30:51.685 [2024-12-13 03:41:52.769453] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.685 [2024-12-13 03:41:52.769459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769482] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.769490] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.810936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.685 [2024-12-13 03:41:52.810956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.685 [2024-12-13 03:41:52.810962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.810969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.685 [2024-12-13 03:41:52.810985] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:51.685 [2024-12-13 03:41:52.810993] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:51.685 [2024-12-13 03:41:52.811001] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:51.685 [2024-12-13 03:41:52.811012] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:51.685 [2024-12-13 03:41:52.811020] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:51.685 [2024-12-13 03:41:52.811028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:51.685 [2024-12-13 03:41:52.811044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:51.685 [2024-12-13 03:41:52.811055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.811087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.685 [2024-12-13 03:41:52.811107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.685 [2024-12-13 03:41:52.811202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.685 [2024-12-13 03:41:52.811210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.685 [2024-12-13 03:41:52.811214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.685 [2024-12-13 03:41:52.811232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811239] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.811255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.685 [2024-12-13 03:41:52.811264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.811282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.685 [2024-12-13 03:41:52.811289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.811307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.685 [2024-12-13 03:41:52.811314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.811331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.685 [2024-12-13 03:41:52.811338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:51.685 [2024-12-13 03:41:52.811352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:51.685 [2024-12-13 03:41:52.811361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.685 [2024-12-13 03:41:52.811381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.685 [2024-12-13 03:41:52.811397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.685 [2024-12-13 03:41:52.811405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:51.685 [2024-12-13 03:41:52.811410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:51.685 [2024-12-13 03:41:52.811416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.685 [2024-12-13 03:41:52.811422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.685 [2024-12-13 03:41:52.811539] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.685 [2024-12-13 03:41:52.811548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.685 [2024-12-13 03:41:52.811553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.685 [2024-12-13 03:41:52.811559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.685 [2024-12-13 03:41:52.811566] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:51.685 [2024-12-13 03:41:52.811574] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:51.685 [2024-12-13 03:41:52.811599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.686 [2024-12-13 03:41:52.811617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.686 [2024-12-13 03:41:52.811631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.686 [2024-12-13 03:41:52.811721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.686 [2024-12-13 03:41:52.811730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.686 [2024-12-13 03:41:52.811739] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811745] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:30:51.686 [2024-12-13 03:41:52.811752] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.686 [2024-12-13 03:41:52.811759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811777] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811784] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.686 [2024-12-13 03:41:52.811810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.686 [2024-12-13 03:41:52.811814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.686 [2024-12-13 03:41:52.811841] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:51.686 [2024-12-13 03:41:52.811881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.686 [2024-12-13 03:41:52.811899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.686 [2024-12-13 03:41:52.811908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:30:51.686 [2024-12-13 03:41:52.811914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.811938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:30:51.686 [2024-12-13 03:41:52.811948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.686 [2024-12-13 03:41:52.811967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.686 [2024-12-13 03:41:52.811975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:51.686 [2024-12-13 03:41:52.812135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.686 [2024-12-13 03:41:52.812147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.686 [2024-12-13 03:41:52.812152] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.812161] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:30:51.686 [2024-12-13 03:41:52.812167] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:30:51.686 [2024-12-13 03:41:52.812173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.812183] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.812188] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.812199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.686 [2024-12-13 03:41:52.812206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.686 [2024-12-13 03:41:52.812211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.812217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:30:51.686 [2024-12-13 03:41:52.854008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.686 [2024-12-13 03:41:52.854028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.686 [2024-12-13 03:41:52.854033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.854048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.686 [2024-12-13 03:41:52.854072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.854080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.686 [2024-12-13 03:41:52.854092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.686 [2024-12-13 03:41:52.854118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.686 [2024-12-13 03:41:52.854248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.686 [2024-12-13 03:41:52.854257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.686 [2024-12-13 03:41:52.854262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.686 
[2024-12-13 03:41:52.854267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:30:51.686 [2024-12-13 03:41:52.854273] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:30:51.686 [2024-12-13 03:41:52.854279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.854288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.686 [2024-12-13 03:41:52.854293] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.898935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.950 [2024-12-13 03:41:52.898957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.950 [2024-12-13 03:41:52.898963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.898969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.950 [2024-12-13 03:41:52.898988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.898995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.950 [2024-12-13 03:41:52.899008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.950 [2024-12-13 03:41:52.899033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.950 [2024-12-13 03:41:52.899153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.950 [2024-12-13 03:41:52.899166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.950 [2024-12-13 03:41:52.899171] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.899177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:30:51.950 [2024-12-13 03:41:52.899183] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:30:51.950 [2024-12-13 03:41:52.899188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.899197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.899202] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.941009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.950 [2024-12-13 03:41:52.941028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.950 [2024-12-13 03:41:52.941033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.950 [2024-12-13 03:41:52.941039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.950 ===================================================== 00:30:51.950 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:51.950 ===================================================== 00:30:51.950 Controller Capabilities/Features 00:30:51.950 ================================ 00:30:51.950 Vendor ID: 0000 00:30:51.950 Subsystem Vendor ID: 0000 
00:30:51.950 Serial Number: .................... 00:30:51.950 Model Number: ........................................ 00:30:51.950 Firmware Version: 25.01 00:30:51.950 Recommended Arb Burst: 0 00:30:51.950 IEEE OUI Identifier: 00 00 00 00:30:51.950 Multi-path I/O 00:30:51.950 May have multiple subsystem ports: No 00:30:51.950 May have multiple controllers: No 00:30:51.950 Associated with SR-IOV VF: No 00:30:51.950 Max Data Transfer Size: 131072 00:30:51.951 Max Number of Namespaces: 0 00:30:51.951 Max Number of I/O Queues: 1024 00:30:51.951 NVMe Specification Version (VS): 1.3 00:30:51.951 NVMe Specification Version (Identify): 1.3 00:30:51.951 Maximum Queue Entries: 128 00:30:51.951 Contiguous Queues Required: Yes 00:30:51.951 Arbitration Mechanisms Supported 00:30:51.951 Weighted Round Robin: Not Supported 00:30:51.951 Vendor Specific: Not Supported 00:30:51.951 Reset Timeout: 15000 ms 00:30:51.951 Doorbell Stride: 4 bytes 00:30:51.951 NVM Subsystem Reset: Not Supported 00:30:51.951 Command Sets Supported 00:30:51.951 NVM Command Set: Supported 00:30:51.951 Boot Partition: Not Supported 00:30:51.951 Memory Page Size Minimum: 4096 bytes 00:30:51.951 Memory Page Size Maximum: 4096 bytes 00:30:51.951 Persistent Memory Region: Not Supported 00:30:51.951 Optional Asynchronous Events Supported 00:30:51.951 Namespace Attribute Notices: Not Supported 00:30:51.951 Firmware Activation Notices: Not Supported 00:30:51.951 ANA Change Notices: Not Supported 00:30:51.951 PLE Aggregate Log Change Notices: Not Supported 00:30:51.951 LBA Status Info Alert Notices: Not Supported 00:30:51.951 EGE Aggregate Log Change Notices: Not Supported 00:30:51.951 Normal NVM Subsystem Shutdown event: Not Supported 00:30:51.951 Zone Descriptor Change Notices: Not Supported 00:30:51.951 Discovery Log Change Notices: Supported 00:30:51.951 Controller Attributes 00:30:51.951 128-bit Host Identifier: Not Supported 00:30:51.951 Non-Operational Permissive Mode: Not Supported 00:30:51.951 NVM Sets: Not Supported 00:30:51.951 Read Recovery Levels: Not Supported 00:30:51.951 Endurance Groups: Not Supported 00:30:51.951 Predictable Latency Mode: Not Supported 00:30:51.951 Traffic Based Keep ALive: Not Supported 00:30:51.951 Namespace Granularity: Not Supported 00:30:51.951 SQ Associations: Not Supported 00:30:51.951 UUID List: Not Supported 00:30:51.951 Multi-Domain Subsystem: Not Supported 00:30:51.951 Fixed Capacity Management: Not Supported 00:30:51.951 Variable Capacity Management: Not Supported 00:30:51.951 Delete Endurance Group: Not Supported 00:30:51.951 Delete NVM Set: Not Supported 00:30:51.951 Extended LBA Formats Supported: Not Supported 00:30:51.951 Flexible Data Placement Supported: Not Supported 00:30:51.951 00:30:51.951 Controller Memory Buffer Support 00:30:51.951 ================================ 00:30:51.951 Supported: No 00:30:51.951 00:30:51.951 Persistent Memory Region Support 00:30:51.951 ================================ 00:30:51.951 Supported: No 00:30:51.951 00:30:51.951 Admin Command Set Attributes 00:30:51.951 ============================ 00:30:51.951 Security Send/Receive: Not Supported 00:30:51.951 Format NVM: Not Supported 00:30:51.951 Firmware Activate/Download: Not Supported 00:30:51.951 Namespace Management: Not Supported 00:30:51.951 Device Self-Test: Not Supported 00:30:51.951 Directives: Not Supported 00:30:51.951 NVMe-MI: Not Supported 00:30:51.951 Virtualization Management: Not Supported 00:30:51.951 Doorbell Buffer Config: Not Supported 00:30:51.951 Get LBA Status Capability: Not Supported 
00:30:51.951 Command & Feature Lockdown Capability: Not Supported 00:30:51.951 Abort Command Limit: 1 00:30:51.951 Async Event Request Limit: 4 00:30:51.951 Number of Firmware Slots: N/A 00:30:51.951 Firmware Slot 1 Read-Only: N/A 00:30:51.951 Firmware Activation Without Reset: N/A 00:30:51.951 Multiple Update Detection Support: N/A 00:30:51.951 Firmware Update Granularity: No Information Provided 00:30:51.951 Per-Namespace SMART Log: No 00:30:51.951 Asymmetric Namespace Access Log Page: Not Supported 00:30:51.951 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:51.951 Command Effects Log Page: Not Supported 00:30:51.951 Get Log Page Extended Data: Supported 00:30:51.951 Telemetry Log Pages: Not Supported 00:30:51.951 Persistent Event Log Pages: Not Supported 00:30:51.951 Supported Log Pages Log Page: May Support 00:30:51.951 Commands Supported & Effects Log Page: Not Supported 00:30:51.951 Feature Identifiers & Effects Log Page:May Support 00:30:51.951 NVMe-MI Commands & Effects Log Page: May Support 00:30:51.951 Data Area 4 for Telemetry Log: Not Supported 00:30:51.951 Error Log Page Entries Supported: 128 00:30:51.951 Keep Alive: Not Supported 00:30:51.951 00:30:51.951 NVM Command Set Attributes 00:30:51.951 ========================== 00:30:51.951 Submission Queue Entry Size 00:30:51.951 Max: 1 00:30:51.951 Min: 1 00:30:51.951 Completion Queue Entry Size 00:30:51.951 Max: 1 00:30:51.951 Min: 1 00:30:51.951 Number of Namespaces: 0 00:30:51.951 Compare Command: Not Supported 00:30:51.951 Write Uncorrectable Command: Not Supported 00:30:51.951 Dataset Management Command: Not Supported 00:30:51.951 Write Zeroes Command: Not Supported 00:30:51.951 Set Features Save Field: Not Supported 00:30:51.951 Reservations: Not Supported 00:30:51.951 Timestamp: Not Supported 00:30:51.951 Copy: Not Supported 00:30:51.951 Volatile Write Cache: Not Present 00:30:51.951 Atomic Write Unit (Normal): 1 00:30:51.951 Atomic Write Unit (PFail): 1 00:30:51.951 Atomic Compare & Write Unit: 1 00:30:51.951 Fused Compare & Write: Supported 00:30:51.951 Scatter-Gather List 00:30:51.951 SGL Command Set: Supported 00:30:51.951 SGL Keyed: Supported 00:30:51.951 SGL Bit Bucket Descriptor: Not Supported 00:30:51.951 SGL Metadata Pointer: Not Supported 00:30:51.951 Oversized SGL: Not Supported 00:30:51.951 SGL Metadata Address: Not Supported 00:30:51.951 SGL Offset: Supported 00:30:51.951 Transport SGL Data Block: Not Supported 00:30:51.951 Replay Protected Memory Block: Not Supported 00:30:51.951 00:30:51.951 Firmware Slot Information 00:30:51.951 ========================= 00:30:51.951 Active slot: 0 00:30:51.951 00:30:51.951 00:30:51.951 Error Log 00:30:51.951 ========= 00:30:51.951 00:30:51.951 Active Namespaces 00:30:51.951 ================= 00:30:51.951 Discovery Log Page 00:30:51.951 ================== 00:30:51.951 Generation Counter: 2 00:30:51.951 Number of Records: 2 00:30:51.951 Record Format: 0 00:30:51.951 00:30:51.951 Discovery Log Entry 0 00:30:51.951 ---------------------- 00:30:51.951 Transport Type: 3 (TCP) 00:30:51.951 Address Family: 1 (IPv4) 00:30:51.951 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:51.951 Entry Flags: 00:30:51.951 Duplicate Returned Information: 1 00:30:51.951 Explicit Persistent Connection Support for Discovery: 1 00:30:51.951 Transport Requirements: 00:30:51.951 Secure Channel: Not Required 00:30:51.951 Port ID: 0 (0x0000) 00:30:51.951 Controller ID: 65535 (0xffff) 00:30:51.951 Admin Max SQ Size: 128 00:30:51.951 Transport Service Identifier: 4420 00:30:51.951 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:51.951 Transport Address: 10.0.0.2 00:30:51.951 Discovery Log Entry 1 00:30:51.951 ---------------------- 00:30:51.951 Transport Type: 3 (TCP) 00:30:51.951 Address Family: 1 (IPv4) 00:30:51.951 Subsystem Type: 2 (NVM Subsystem) 00:30:51.951 Entry Flags: 00:30:51.951 Duplicate Returned Information: 0 00:30:51.951 Explicit Persistent Connection Support for Discovery: 0 00:30:51.951 Transport Requirements: 00:30:51.951 Secure Channel: Not Required 00:30:51.951 Port ID: 0 (0x0000) 00:30:51.951 Controller ID: 65535 (0xffff) 00:30:51.951 Admin Max SQ Size: 128 00:30:51.951 Transport Service Identifier: 4420 00:30:51.951 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:51.951 Transport Address: 10.0.0.2 [2024-12-13 03:41:52.941165] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:51.951 [2024-12-13 03:41:52.941180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.951 [2024-12-13 03:41:52.941190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.951 [2024-12-13 03:41:52.941198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:30:51.951 [2024-12-13 03:41:52.941206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.951 [2024-12-13 03:41:52.941212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:30:51.951 [2024-12-13 03:41:52.941219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.951 [2024-12-13 03:41:52.941226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.951 [2024-12-13 03:41:52.941233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.951 [2024-12-13 03:41:52.941245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941251] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.941274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.941293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.941417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.941426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.941432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.941455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:30:51.952 [2024-12-13 03:41:52.941469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.941482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.941503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.941644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.941652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.941657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.941673] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:51.952 [2024-12-13 03:41:52.941680] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:51.952 [2024-12-13 03:41:52.941696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.941719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.941733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.941818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.941826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.941831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.941849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.941870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.941883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.941964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.941973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.941978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.941984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 
03:41:52.941996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942002] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.942016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.942029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.942119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.942127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.942132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.942149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.942169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.942182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.942276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.942285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.942290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.942307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.942330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.942343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.942421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.942429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.942434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.942452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942457] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.942471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.942484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.942558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.942566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.942571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.942589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.942611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.942624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.942737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.942744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.942749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.942768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.942784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.942792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.942805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.946932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.946950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.946955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.946961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.946979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.946989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.946994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.952 [2024-12-13 03:41:52.947006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.952 [2024-12-13 03:41:52.947023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.952 [2024-12-13 03:41:52.947145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.952 [2024-12-13 03:41:52.947154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.952 [2024-12-13 03:41:52.947159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.952 [2024-12-13 03:41:52.947164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.952 [2024-12-13 03:41:52.947175] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:30:51.952 00:30:51.952 03:41:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:51.952 [2024-12-13 03:41:53.044156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:51.953 [2024-12-13 03:41:53.044231] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819551 ] 00:30:51.953 [2024-12-13 03:41:53.105267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:51.953 [2024-12-13 03:41:53.105361] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:51.953 [2024-12-13 03:41:53.105372] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:51.953 [2024-12-13 03:41:53.105394] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:51.953 [2024-12-13 03:41:53.105407] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:51.953 [2024-12-13 03:41:53.109215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:51.953 [2024-12-13 03:41:53.109255] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:30:51.953 [2024-12-13 03:41:53.115930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:51.953 [2024-12-13 03:41:53.115957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:51.953 [2024-12-13 03:41:53.115967] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:51.953 [2024-12-13 03:41:53.115973] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:51.953 [2024-12-13 03:41:53.116018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.116028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.116037] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.116055] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:51.953 [2024-12-13 03:41:53.116080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.122934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.122959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 03:41:53.122968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.122976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.122994] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:51.953 [2024-12-13 03:41:53.123008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:51.953 [2024-12-13 03:41:53.123017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:51.953 [2024-12-13 03:41:53.123034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.123061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.123081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.123279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.123288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 03:41:53.123294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.123314] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:51.953 [2024-12-13 03:41:53.123327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:51.953 [2024-12-13 03:41:53.123336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.123363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.123379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.123456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.123466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 
03:41:53.123471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.123484] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:51.953 [2024-12-13 03:41:53.123495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:51.953 [2024-12-13 03:41:53.123505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.123532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.123547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.123632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.123640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 03:41:53.123647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.123660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:51.953 [2024-12-13 03:41:53.123676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.123700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.123713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.123782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.123791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 03:41:53.123796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.123809] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:51.953 [2024-12-13 03:41:53.123818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:51.953 [2024-12-13 03:41:53.123828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:30:51.953 [2024-12-13 03:41:53.123936] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:51.953 [2024-12-13 03:41:53.123944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:51.953 [2024-12-13 03:41:53.123960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.123972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.123982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.123999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.124088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.124098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 03:41:53.124103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.124108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.124116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:51.953 [2024-12-13 03:41:53.124129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.124136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.124141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.124151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.124165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.124252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.953 [2024-12-13 03:41:53.124263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.953 [2024-12-13 03:41:53.124267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.124273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.953 [2024-12-13 03:41:53.124280] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:51.953 [2024-12-13 03:41:53.124287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:51.953 [2024-12-13 03:41:53.124297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:51.953 [2024-12-13 03:41:53.124312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify 
controller (timeout 30000 ms) 00:30:51.953 [2024-12-13 03:41:53.124327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.953 [2024-12-13 03:41:53.124334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.953 [2024-12-13 03:41:53.124345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.953 [2024-12-13 03:41:53.124359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.953 [2024-12-13 03:41:53.124479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.953 [2024-12-13 03:41:53.124488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.953 [2024-12-13 03:41:53.124493] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124499] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:30:51.954 [2024-12-13 03:41:53.124506] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.954 [2024-12-13 03:41:53.124513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124527] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124534] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.954 [2024-12-13 03:41:53.124552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.954 [2024-12-13 03:41:53.124557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.954 [2024-12-13 03:41:53.124576] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:51.954 [2024-12-13 03:41:53.124583] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:51.954 [2024-12-13 03:41:53.124592] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:51.954 [2024-12-13 03:41:53.124602] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:51.954 [2024-12-13 03:41:53.124608] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:51.954 [2024-12-13 03:41:53.124615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.124629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.124641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 
00:30:51.954 [2024-12-13 03:41:53.124666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.954 [2024-12-13 03:41:53.124684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.954 [2024-12-13 03:41:53.124772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.954 [2024-12-13 03:41:53.124781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.954 [2024-12-13 03:41:53.124786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.954 [2024-12-13 03:41:53.124803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.124828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.954 [2024-12-13 03:41:53.124837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.124856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.954 [2024-12-13 03:41:53.124863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.124881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.954 [2024-12-13 03:41:53.124888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.124906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.954 [2024-12-13 03:41:53.124912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.124931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.124942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.124948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 
00:30:51.954 [2024-12-13 03:41:53.124959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.954 [2024-12-13 03:41:53.124975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:51.954 [2024-12-13 03:41:53.124982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:51.954 [2024-12-13 03:41:53.124988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:51.954 [2024-12-13 03:41:53.124993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.954 [2024-12-13 03:41:53.124999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.954 [2024-12-13 03:41:53.125115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.954 [2024-12-13 03:41:53.125124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.954 [2024-12-13 03:41:53.125129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.954 [2024-12-13 03:41:53.125142] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:51.954 [2024-12-13 03:41:53.125152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.125205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:51.954 [2024-12-13 03:41:53.125220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.954 [2024-12-13 03:41:53.125302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.954 [2024-12-13 03:41:53.125311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.954 [2024-12-13 03:41:53.125315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.954 [2024-12-13 03:41:53.125396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to wait for identify active ns (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.125450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.954 [2024-12-13 03:41:53.125466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.954 [2024-12-13 03:41:53.125577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.954 [2024-12-13 03:41:53.125586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.954 [2024-12-13 03:41:53.125591] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125596] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:30:51.954 [2024-12-13 03:41:53.125602] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.954 [2024-12-13 03:41:53.125608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125617] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125622] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.954 [2024-12-13 03:41:53.125643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.954 [2024-12-13 03:41:53.125650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.954 [2024-12-13 03:41:53.125676] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:51.954 [2024-12-13 03:41:53.125696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:51.954 [2024-12-13 03:41:53.125722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.954 [2024-12-13 03:41:53.125741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.954 [2024-12-13 03:41:53.125756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.954 [2024-12-13 03:41:53.125878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.954 [2024-12-13 03:41:53.125890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.954 [2024-12-13 03:41:53.125896] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125901] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:30:51.954 [2024-12-13 03:41:53.125908] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.954 [2024-12-13 03:41:53.125924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.954 [2024-12-13 03:41:53.125938] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.125948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.125955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 03:41:53.125960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.125965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.955 [2024-12-13 03:41:53.125983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.126032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.126048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.955 [2024-12-13 03:41:53.126141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.955 [2024-12-13 03:41:53.126150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.955 [2024-12-13 03:41:53.126154] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126159] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:30:51.955 [2024-12-13 03:41:53.126165] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.955 [2024-12-13 03:41:53.126175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126184] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126189] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.126208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 03:41:53.126212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.955 
[2024-12-13 03:41:53.126234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126285] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:51.955 [2024-12-13 03:41:53.126294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:51.955 [2024-12-13 03:41:53.126301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:51.955 [2024-12-13 03:41:53.126330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.126347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.126356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.126377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.955 [2024-12-13 03:41:53.126396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.955 [2024-12-13 03:41:53.126404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:51.955 [2024-12-13 03:41:53.126525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.126536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 03:41:53.126542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:30:51.955 [2024-12-13 03:41:53.126557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.126566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 
03:41:53.126571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:30:51.955 [2024-12-13 03:41:53.126591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.126606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.126620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:51.955 [2024-12-13 03:41:53.126733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.126741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 03:41:53.126746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:30:51.955 [2024-12-13 03:41:53.126763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.126780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.126797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:51.955 [2024-12-13 03:41:53.126867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.126876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 03:41:53.126881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:30:51.955 [2024-12-13 03:41:53.126898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.126903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.126913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.130944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:51.955 [2024-12-13 03:41:53.131127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.955 [2024-12-13 03:41:53.131137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.955 [2024-12-13 03:41:53.131142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.131147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:30:51.955 [2024-12-13 03:41:53.131171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.131178] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.131190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.131201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.131207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.131216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.131225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.955 [2024-12-13 03:41:53.131231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:30:51.955 [2024-12-13 03:41:53.131242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.955 [2024-12-13 03:41:53.131259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:30:51.956 [2024-12-13 03:41:53.131274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.956 [2024-12-13 03:41:53.131293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:51.956 [2024-12-13 03:41:53.131301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:51.956 [2024-12-13 03:41:53.131306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:51.956 [2024-12-13 03:41:53.131312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:51.956 [2024-12-13 03:41:53.131501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.956 [2024-12-13 03:41:53.131509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.956 [2024-12-13 03:41:53.131514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131520] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:30:51.956 [2024-12-13 03:41:53.131527] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:30:51.956 [2024-12-13 03:41:53.131533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131552] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131558] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.956 [2024-12-13 03:41:53.131581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.956 [2024-12-13 03:41:53.131586] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:30:51.956 [2024-12-13 03:41:53.131591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:30:51.956 [2024-12-13 03:41:53.131596] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:30:51.956 [2024-12-13 03:41:53.131602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131609] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131614] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.956 [2024-12-13 03:41:53.131629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.956 [2024-12-13 03:41:53.131634] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131639] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:30:51.956 [2024-12-13 03:41:53.131645] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:30:51.956 [2024-12-13 03:41:53.131650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131661] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131666] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:51.956 [2024-12-13 03:41:53.131679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:51.956 [2024-12-13 03:41:53.131683] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131688] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:30:51.956 [2024-12-13 03:41:53.131694] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:30:51.956 [2024-12-13 03:41:53.131702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131710] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131715] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.956 [2024-12-13 03:41:53.131733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.956 [2024-12-13 03:41:53.131737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:30:51.956 [2024-12-13 03:41:53.131768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.956 [2024-12-13 03:41:53.131780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.956 [2024-12-13 03:41:53.131784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on 
tqpair=0x61500001db80 00:30:51.956 [2024-12-13 03:41:53.131803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.956 [2024-12-13 03:41:53.131810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.956 [2024-12-13 03:41:53.131815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:30:51.956 [2024-12-13 03:41:53.131829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.956 [2024-12-13 03:41:53.131836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.956 [2024-12-13 03:41:53.131840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.956 [2024-12-13 03:41:53.131845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:30:51.956 ===================================================== 00:30:51.956 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.956 ===================================================== 00:30:51.956 Controller Capabilities/Features 00:30:51.956 ================================ 00:30:51.956 Vendor ID: 8086 00:30:51.956 Subsystem Vendor ID: 8086 00:30:51.956 Serial Number: SPDK00000000000001 00:30:51.956 Model Number: SPDK bdev Controller 00:30:51.956 Firmware Version: 25.01 00:30:51.956 Recommended Arb Burst: 6 00:30:51.956 IEEE OUI Identifier: e4 d2 5c 00:30:51.956 Multi-path I/O 00:30:51.956 May have multiple subsystem ports: Yes 00:30:51.956 May have multiple controllers: Yes 00:30:51.956 Associated with SR-IOV VF: No 00:30:51.956 Max Data Transfer Size: 131072 00:30:51.956 Max Number of Namespaces: 32 00:30:51.956 Max Number of I/O Queues: 127 00:30:51.956 NVMe Specification Version (VS): 1.3 00:30:51.956 NVMe Specification Version (Identify): 1.3 00:30:51.956 Maximum Queue Entries: 128 00:30:51.956 Contiguous Queues Required: Yes 00:30:51.956 Arbitration Mechanisms Supported 00:30:51.956 Weighted Round Robin: Not Supported 00:30:51.956 Vendor Specific: Not Supported 00:30:51.956 Reset Timeout: 15000 ms 00:30:51.956 Doorbell Stride: 4 bytes 00:30:51.956 NVM Subsystem Reset: Not Supported 00:30:51.956 Command Sets Supported 00:30:51.956 NVM Command Set: Supported 00:30:51.956 Boot Partition: Not Supported 00:30:51.956 Memory Page Size Minimum: 4096 bytes 00:30:51.956 Memory Page Size Maximum: 4096 bytes 00:30:51.956 Persistent Memory Region: Not Supported 00:30:51.956 Optional Asynchronous Events Supported 00:30:51.956 Namespace Attribute Notices: Supported 00:30:51.956 Firmware Activation Notices: Not Supported 00:30:51.956 ANA Change Notices: Not Supported 00:30:51.956 PLE Aggregate Log Change Notices: Not Supported 00:30:51.956 LBA Status Info Alert Notices: Not Supported 00:30:51.956 EGE Aggregate Log Change Notices: Not Supported 00:30:51.956 Normal NVM Subsystem Shutdown event: Not Supported 00:30:51.956 Zone Descriptor Change Notices: Not Supported 00:30:51.956 Discovery Log Change Notices: Not Supported 00:30:51.956 Controller Attributes 00:30:51.956 128-bit Host Identifier: Supported 00:30:51.956 Non-Operational Permissive Mode: Not Supported 00:30:51.956 NVM Sets: Not Supported 00:30:51.956 Read Recovery Levels: Not Supported 00:30:51.956 Endurance Groups: Not Supported 00:30:51.956 Predictable Latency Mode: Not Supported 00:30:51.956 Traffic Based Keep ALive: Not Supported 00:30:51.956 Namespace 
Granularity: Not Supported 00:30:51.956 SQ Associations: Not Supported 00:30:51.956 UUID List: Not Supported 00:30:51.956 Multi-Domain Subsystem: Not Supported 00:30:51.956 Fixed Capacity Management: Not Supported 00:30:51.956 Variable Capacity Management: Not Supported 00:30:51.956 Delete Endurance Group: Not Supported 00:30:51.956 Delete NVM Set: Not Supported 00:30:51.956 Extended LBA Formats Supported: Not Supported 00:30:51.956 Flexible Data Placement Supported: Not Supported 00:30:51.956 00:30:51.956 Controller Memory Buffer Support 00:30:51.956 ================================ 00:30:51.956 Supported: No 00:30:51.956 00:30:51.956 Persistent Memory Region Support 00:30:51.956 ================================ 00:30:51.956 Supported: No 00:30:51.956 00:30:51.956 Admin Command Set Attributes 00:30:51.956 ============================ 00:30:51.956 Security Send/Receive: Not Supported 00:30:51.956 Format NVM: Not Supported 00:30:51.956 Firmware Activate/Download: Not Supported 00:30:51.956 Namespace Management: Not Supported 00:30:51.956 Device Self-Test: Not Supported 00:30:51.956 Directives: Not Supported 00:30:51.956 NVMe-MI: Not Supported 00:30:51.956 Virtualization Management: Not Supported 00:30:51.956 Doorbell Buffer Config: Not Supported 00:30:51.956 Get LBA Status Capability: Not Supported 00:30:51.956 Command & Feature Lockdown Capability: Not Supported 00:30:51.956 Abort Command Limit: 4 00:30:51.956 Async Event Request Limit: 4 00:30:51.956 Number of Firmware Slots: N/A 00:30:51.956 Firmware Slot 1 Read-Only: N/A 00:30:51.956 Firmware Activation Without Reset: N/A 00:30:51.956 Multiple Update Detection Support: N/A 00:30:51.956 Firmware Update Granularity: No Information Provided 00:30:51.956 Per-Namespace SMART Log: No 00:30:51.956 Asymmetric Namespace Access Log Page: Not Supported 00:30:51.956 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:51.956 Command Effects Log Page: Supported 00:30:51.956 Get Log Page Extended Data: Supported 00:30:51.956 Telemetry Log Pages: Not Supported 00:30:51.956 Persistent Event Log Pages: Not Supported 00:30:51.956 Supported Log Pages Log Page: May Support 00:30:51.957 Commands Supported & Effects Log Page: Not Supported 00:30:51.957 Feature Identifiers & Effects Log Page:May Support 00:30:51.957 NVMe-MI Commands & Effects Log Page: May Support 00:30:51.957 Data Area 4 for Telemetry Log: Not Supported 00:30:51.957 Error Log Page Entries Supported: 128 00:30:51.957 Keep Alive: Supported 00:30:51.957 Keep Alive Granularity: 10000 ms 00:30:51.957 00:30:51.957 NVM Command Set Attributes 00:30:51.957 ========================== 00:30:51.957 Submission Queue Entry Size 00:30:51.957 Max: 64 00:30:51.957 Min: 64 00:30:51.957 Completion Queue Entry Size 00:30:51.957 Max: 16 00:30:51.957 Min: 16 00:30:51.957 Number of Namespaces: 32 00:30:51.957 Compare Command: Supported 00:30:51.957 Write Uncorrectable Command: Not Supported 00:30:51.957 Dataset Management Command: Supported 00:30:51.957 Write Zeroes Command: Supported 00:30:51.957 Set Features Save Field: Not Supported 00:30:51.957 Reservations: Supported 00:30:51.957 Timestamp: Not Supported 00:30:51.957 Copy: Supported 00:30:51.957 Volatile Write Cache: Present 00:30:51.957 Atomic Write Unit (Normal): 1 00:30:51.957 Atomic Write Unit (PFail): 1 00:30:51.957 Atomic Compare & Write Unit: 1 00:30:51.957 Fused Compare & Write: Supported 00:30:51.957 Scatter-Gather List 00:30:51.957 SGL Command Set: Supported 00:30:51.957 SGL Keyed: Supported 00:30:51.957 SGL Bit Bucket Descriptor: Not Supported 
00:30:51.957 SGL Metadata Pointer: Not Supported 00:30:51.957 Oversized SGL: Not Supported 00:30:51.957 SGL Metadata Address: Not Supported 00:30:51.957 SGL Offset: Supported 00:30:51.957 Transport SGL Data Block: Not Supported 00:30:51.957 Replay Protected Memory Block: Not Supported 00:30:51.957 00:30:51.957 Firmware Slot Information 00:30:51.957 ========================= 00:30:51.957 Active slot: 1 00:30:51.957 Slot 1 Firmware Revision: 25.01 00:30:51.957 00:30:51.957 00:30:51.957 Commands Supported and Effects 00:30:51.957 ============================== 00:30:51.957 Admin Commands 00:30:51.957 -------------- 00:30:51.957 Get Log Page (02h): Supported 00:30:51.957 Identify (06h): Supported 00:30:51.957 Abort (08h): Supported 00:30:51.957 Set Features (09h): Supported 00:30:51.957 Get Features (0Ah): Supported 00:30:51.957 Asynchronous Event Request (0Ch): Supported 00:30:51.957 Keep Alive (18h): Supported 00:30:51.957 I/O Commands 00:30:51.957 ------------ 00:30:51.957 Flush (00h): Supported LBA-Change 00:30:51.957 Write (01h): Supported LBA-Change 00:30:51.957 Read (02h): Supported 00:30:51.957 Compare (05h): Supported 00:30:51.957 Write Zeroes (08h): Supported LBA-Change 00:30:51.957 Dataset Management (09h): Supported LBA-Change 00:30:51.957 Copy (19h): Supported LBA-Change 00:30:51.957 00:30:51.957 Error Log 00:30:51.957 ========= 00:30:51.957 00:30:51.957 Arbitration 00:30:51.957 =========== 00:30:51.957 Arbitration Burst: 1 00:30:51.957 00:30:51.957 Power Management 00:30:51.957 ================ 00:30:51.957 Number of Power States: 1 00:30:51.957 Current Power State: Power State #0 00:30:51.957 Power State #0: 00:30:51.957 Max Power: 0.00 W 00:30:51.957 Non-Operational State: Operational 00:30:51.957 Entry Latency: Not Reported 00:30:51.957 Exit Latency: Not Reported 00:30:51.957 Relative Read Throughput: 0 00:30:51.957 Relative Read Latency: 0 00:30:51.957 Relative Write Throughput: 0 00:30:51.957 Relative Write Latency: 0 00:30:51.957 Idle Power: Not Reported 00:30:51.957 Active Power: Not Reported 00:30:51.957 Non-Operational Permissive Mode: Not Supported 00:30:51.957 00:30:51.957 Health Information 00:30:51.957 ================== 00:30:51.957 Critical Warnings: 00:30:51.957 Available Spare Space: OK 00:30:51.957 Temperature: OK 00:30:51.957 Device Reliability: OK 00:30:51.957 Read Only: No 00:30:51.957 Volatile Memory Backup: OK 00:30:51.957 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:51.957 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:51.957 Available Spare: 0% 00:30:51.957 Available Spare Threshold: 0% 00:30:51.957 Life Percentage Used:[2024-12-13 03:41:53.131989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.131999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:30:51.957 [2024-12-13 03:41:53.132011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.957 [2024-12-13 03:41:53.132028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:51.957 [2024-12-13 03:41:53.132157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.957 [2024-12-13 03:41:53.132166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.957 [2024-12-13 03:41:53.132172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132180] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132225] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:51.957 [2024-12-13 03:41:53.132242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.957 [2024-12-13 03:41:53.132260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.957 [2024-12-13 03:41:53.132274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.957 [2024-12-13 03:41:53.132287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.957 [2024-12-13 03:41:53.132309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.957 [2024-12-13 03:41:53.132333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.957 [2024-12-13 03:41:53.132350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.957 [2024-12-13 03:41:53.132472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.957 [2024-12-13 03:41:53.132483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.957 [2024-12-13 03:41:53.132488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.957 [2024-12-13 03:41:53.132529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.957 [2024-12-13 03:41:53.132548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.957 [2024-12-13 03:41:53.132672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.957 [2024-12-13 03:41:53.132681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:51.957 [2024-12-13 03:41:53.132686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132698] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:51.957 [2024-12-13 03:41:53.132705] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:51.957 [2024-12-13 03:41:53.132721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.957 [2024-12-13 03:41:53.132743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.957 [2024-12-13 03:41:53.132757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.957 [2024-12-13 03:41:53.132835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.957 [2024-12-13 03:41:53.132843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.957 [2024-12-13 03:41:53.132848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.132866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.132876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.957 [2024-12-13 03:41:53.132886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.957 [2024-12-13 03:41:53.132899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.957 [2024-12-13 03:41:53.132979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.957 [2024-12-13 03:41:53.132991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.957 [2024-12-13 03:41:53.132996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.957 [2024-12-13 03:41:53.133001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.957 [2024-12-13 03:41:53.133013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.133131] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.133140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.133144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.133161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.133289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.133297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.133302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.133322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.133428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.133436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.133441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.133458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.133583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.133591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.133596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.133615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.133734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.133742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.133747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.133764] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.133886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.133894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.133903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.133925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.133937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.133946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.133959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.134030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.134039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.134047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 
[2024-12-13 03:41:53.134052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.134065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.134084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.134098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.134188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.134199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.134203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.134223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.134242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.134254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.134339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.134347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.134352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.134368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134374] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.134379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.134393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.134406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.137929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.137946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.137951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.137956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 
[2024-12-13 03:41:53.137974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.137981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.137991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:30:51.958 [2024-12-13 03:41:53.138002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.958 [2024-12-13 03:41:53.138020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:51.958 [2024-12-13 03:41:53.138194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:51.958 [2024-12-13 03:41:53.138202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:51.958 [2024-12-13 03:41:53.138209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:51.958 [2024-12-13 03:41:53.138214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:30:51.958 [2024-12-13 03:41:53.138225] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:30:52.218 0% 00:30:52.218 Data Units Read: 0 00:30:52.218 Data Units Written: 0 00:30:52.218 Host Read Commands: 0 00:30:52.218 Host Write Commands: 0 00:30:52.218 Controller Busy Time: 0 minutes 00:30:52.218 Power Cycles: 0 00:30:52.218 Power On Hours: 0 hours 00:30:52.218 Unsafe Shutdowns: 0 00:30:52.218 Unrecoverable Media Errors: 0 00:30:52.218 Lifetime Error Log Entries: 0 00:30:52.218 Warning Temperature Time: 0 minutes 00:30:52.218 Critical Temperature Time: 0 minutes 00:30:52.218 00:30:52.218 Number of Queues 00:30:52.218 ================ 00:30:52.218 Number of I/O Submission Queues: 127 00:30:52.218 Number of I/O Completion Queues: 127 00:30:52.218 00:30:52.218 Active Namespaces 00:30:52.218 ================= 00:30:52.218 Namespace ID:1 00:30:52.218 Error Recovery Timeout: Unlimited 00:30:52.218 Command Set Identifier: NVM (00h) 00:30:52.218 Deallocate: Supported 00:30:52.218 Deallocated/Unwritten Error: Not Supported 00:30:52.218 Deallocated Read Value: Unknown 00:30:52.218 Deallocate in Write Zeroes: Not Supported 00:30:52.218 Deallocated Guard Field: 0xFFFF 00:30:52.218 Flush: Supported 00:30:52.218 Reservation: Supported 00:30:52.218 Namespace Sharing Capabilities: Multiple Controllers 00:30:52.218 Size (in LBAs): 131072 (0GiB) 00:30:52.218 Capacity (in LBAs): 131072 (0GiB) 00:30:52.218 Utilization (in LBAs): 131072 (0GiB) 00:30:52.218 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:52.218 EUI64: ABCDEF0123456789 00:30:52.218 UUID: 06a7d9a1-e90c-4e4e-8dee-34c84f86347b 00:30:52.218 Thin Provisioning: Not Supported 00:30:52.218 Per-NS Atomic Units: Yes 00:30:52.218 Atomic Boundary Size (Normal): 0 00:30:52.218 Atomic Boundary Size (PFail): 0 00:30:52.218 Atomic Boundary Offset: 0 00:30:52.218 Maximum Single Source Range Length: 65535 00:30:52.218 Maximum Copy Length: 65535 00:30:52.218 Maximum Source Range Count: 1 00:30:52.218 NGUID/EUI64 Never Reused: No 00:30:52.218 Namespace Write Protected: No 00:30:52.218 Number of LBA Formats: 1 00:30:52.218 Current LBA Format: LBA Format #00 00:30:52.218 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:52.218 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
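The block above is the formatted controller report that the identify host test produced against the NVMe/TCP subsystem at 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1); the surrounding *DEBUG* records show the admin commands (Set Features Number of Queues, Identify, Get Log Page, Keep Alive) the SPDK initiator issued while building it, followed by the controller shutdown. As a rough sketch of how such a report can be reproduced outside the CI harness, assuming an in-tree SPDK build and reusing the address, port, and NQN from the log (the binary path and -r flag syntax are assumptions, not taken from this log):

# Hypothetical manual run of SPDK's identify example over NVMe/TCP; adjust paths to your build.
./build/examples/identify \
  -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'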
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.218 rmmod nvme_tcp 00:30:52.218 rmmod nvme_fabrics 00:30:52.218 rmmod nvme_keyring 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2819278 ']' 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2819278 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2819278 ']' 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2819278 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819278 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.218 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819278' 00:30:52.219 killing process with pid 2819278 00:30:52.219 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2819278 00:30:52.219 03:41:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2819278 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.597 03:41:54 
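The xtrace above is the identify test tearing itself down: the subsystem is deleted over JSON-RPC, the kernel initiator modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages), and the target process with PID 2819278 is killed. A minimal manual equivalent, assuming a stock SPDK checkout where rpc_cmd wraps scripts/rpc.py, would look roughly like:

# Sketch of the teardown steps traced above; the subsystem NQN and module names come from the log.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp      # also drops nvme_fabrics and nvme_keyring, as seen in the rmmod output
kill "$tgt_pid"              # the harness (killprocess) terminates the nvmf target, PID 2819278 in this run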
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.597 03:41:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.131 00:30:56.131 real 0m10.813s 00:30:56.131 user 0m11.957s 00:30:56.131 sys 0m4.653s 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:56.131 ************************************ 00:30:56.131 END TEST nvmf_identify 00:30:56.131 ************************************ 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.131 ************************************ 00:30:56.131 START TEST nvmf_perf 00:30:56.131 ************************************ 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:56.131 * Looking for test storage... 
00:30:56.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.131 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.132 --rc genhtml_branch_coverage=1 00:30:56.132 --rc genhtml_function_coverage=1 00:30:56.132 --rc genhtml_legend=1 00:30:56.132 --rc geninfo_all_blocks=1 00:30:56.132 --rc geninfo_unexecuted_blocks=1 00:30:56.132 00:30:56.132 ' 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.132 --rc genhtml_branch_coverage=1 00:30:56.132 --rc genhtml_function_coverage=1 00:30:56.132 --rc genhtml_legend=1 00:30:56.132 --rc geninfo_all_blocks=1 00:30:56.132 --rc geninfo_unexecuted_blocks=1 00:30:56.132 00:30:56.132 ' 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.132 --rc genhtml_branch_coverage=1 00:30:56.132 --rc genhtml_function_coverage=1 00:30:56.132 --rc genhtml_legend=1 00:30:56.132 --rc geninfo_all_blocks=1 00:30:56.132 --rc geninfo_unexecuted_blocks=1 00:30:56.132 00:30:56.132 ' 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:56.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.132 --rc genhtml_branch_coverage=1 00:30:56.132 --rc genhtml_function_coverage=1 00:30:56.132 --rc genhtml_legend=1 00:30:56.132 --rc geninfo_all_blocks=1 00:30:56.132 --rc geninfo_unexecuted_blocks=1 00:30:56.132 00:30:56.132 ' 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.132 03:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.132 03:41:57 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.132 03:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:01.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:01.410 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:01.410 Found net devices under 0000:af:00.0: cvl_0_0 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.410 03:42:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:01.410 Found net devices under 0000:af:00.1: cvl_0_1 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.410 03:42:02 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:01.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:31:01.410 00:31:01.410 --- 10.0.0.2 ping statistics --- 00:31:01.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.410 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:31:01.410 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:31:01.410 00:31:01.410 --- 10.0.0.1 ping statistics --- 00:31:01.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.411 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2823191 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2823191 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2823191 ']' 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:01.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.411 03:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:01.411 [2024-12-13 03:42:02.525487] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:01.411 [2024-12-13 03:42:02.525577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.695 [2024-12-13 03:42:02.644427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:01.695 [2024-12-13 03:42:02.765527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.695 [2024-12-13 03:42:02.765571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.695 [2024-12-13 03:42:02.765583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.695 [2024-12-13 03:42:02.765595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.695 [2024-12-13 03:42:02.765604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:01.695 [2024-12-13 03:42:02.768276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.695 [2024-12-13 03:42:02.768303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:01.695 [2024-12-13 03:42:02.768322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.695 [2024-12-13 03:42:02.768328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:02.325 03:42:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:05.635 03:42:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:05.635 03:42:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:05.635 03:42:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:05.635 03:42:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:05.893 03:42:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
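(The 0000:5e:00.0 value that perf.sh checks for next came out of the bdev config that gen_nvme.sh just loaded. A hedged sketch of that lookup, with the same jq filter as in the trace and the full rpc.py path abbreviated; the address itself is simply this test node's local NVMe drive:

  rpc.py framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr'
  # -> 0000:5e:00.0 on this machine

perf.sh stores it as local_nvme_trid, appends the matching Nvme0n1 bdev to the subsystem's namespaces alongside Malloc0, and reuses the address for the local-PCIe baseline run, -r 'trtype:PCIe traddr:0000:5e:00.0', before the NVMe/TCP sweeps.)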
00:31:05.893 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:31:05.893 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:05.893 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:05.893 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:06.151 [2024-12-13 03:42:07.178847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.151 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.409 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:06.409 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.409 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:06.409 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:06.667 03:42:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.930 [2024-12-13 03:42:07.973332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.930 03:42:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:07.189 03:42:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:07.189 03:42:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:07.189 03:42:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:07.189 03:42:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:08.566 Initializing NVMe Controllers 00:31:08.566 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:31:08.566 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:08.566 Initialization complete. Launching workers. 
00:31:08.566 ======================================================== 00:31:08.566 Latency(us) 00:31:08.566 Device Information : IOPS MiB/s Average min max 00:31:08.566 PCIE (0000:5e:00.0) NSID 1 from core 0: 91628.99 357.93 348.72 41.79 4441.98 00:31:08.566 ======================================================== 00:31:08.566 Total : 91628.99 357.93 348.72 41.79 4441.98 00:31:08.566 00:31:08.566 03:42:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.943 Initializing NVMe Controllers 00:31:09.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:09.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:09.943 Initialization complete. Launching workers. 00:31:09.943 ======================================================== 00:31:09.943 Latency(us) 00:31:09.943 Device Information : IOPS MiB/s Average min max 00:31:09.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 129.67 0.51 7974.54 129.19 44851.41 00:31:09.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 75.81 0.30 13704.62 5351.31 49511.00 00:31:09.943 ======================================================== 00:31:09.943 Total : 205.48 0.80 10088.55 129.19 49511.00 00:31:09.943 00:31:09.943 03:42:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.319 Initializing NVMe Controllers 00:31:11.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:11.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:11.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:11.319 Initialization complete. Launching workers. 00:31:11.319 ======================================================== 00:31:11.319 Latency(us) 00:31:11.319 Device Information : IOPS MiB/s Average min max 00:31:11.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9430.08 36.84 3390.48 531.35 6973.41 00:31:11.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3830.81 14.96 8358.65 6307.31 16264.29 00:31:11.319 ======================================================== 00:31:11.319 Total : 13260.89 51.80 4825.69 531.35 16264.29 00:31:11.319 00:31:11.319 03:42:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:11.320 03:42:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:11.320 03:42:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:14.613 Initializing NVMe Controllers 00:31:14.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.613 Controller IO queue size 128, less than required. 00:31:14.613 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:14.613 Controller IO queue size 128, less than required. 00:31:14.613 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:14.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:14.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:14.613 Initialization complete. Launching workers. 00:31:14.614 ======================================================== 00:31:14.614 Latency(us) 00:31:14.614 Device Information : IOPS MiB/s Average min max 00:31:14.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.41 380.10 87412.27 55896.15 302495.42 00:31:14.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 571.96 142.99 249185.89 119014.11 579219.40 00:31:14.614 ======================================================== 00:31:14.614 Total : 2092.37 523.09 131633.99 55896.15 579219.40 00:31:14.614 00:31:14.614 03:42:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:14.614 No valid NVMe controllers or AIO or URING devices found 00:31:14.614 Initializing NVMe Controllers 00:31:14.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.614 Controller IO queue size 128, less than required. 00:31:14.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:14.614 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:14.614 Controller IO queue size 128, less than required. 00:31:14.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:14.614 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:14.614 WARNING: Some requested NVMe devices were skipped 00:31:14.614 03:42:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:17.906 Initializing NVMe Controllers 00:31:17.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.906 Controller IO queue size 128, less than required. 00:31:17.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:17.906 Controller IO queue size 128, less than required. 00:31:17.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:17.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:17.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:17.906 Initialization complete. Launching workers. 
00:31:17.906 00:31:17.906 ==================== 00:31:17.906 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:17.906 TCP transport: 00:31:17.906 polls: 8512 00:31:17.906 idle_polls: 5737 00:31:17.906 sock_completions: 2775 00:31:17.906 nvme_completions: 5197 00:31:17.906 submitted_requests: 7810 00:31:17.906 queued_requests: 1 00:31:17.906 00:31:17.906 ==================== 00:31:17.906 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:17.906 TCP transport: 00:31:17.906 polls: 11174 00:31:17.906 idle_polls: 7847 00:31:17.906 sock_completions: 3327 00:31:17.906 nvme_completions: 5605 00:31:17.906 submitted_requests: 8400 00:31:17.906 queued_requests: 1 00:31:17.906 ======================================================== 00:31:17.906 Latency(us) 00:31:17.906 Device Information : IOPS MiB/s Average min max 00:31:17.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1297.45 324.36 105239.28 51743.87 408900.43 00:31:17.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1399.33 349.83 93127.54 53849.16 477485.62 00:31:17.906 ======================================================== 00:31:17.906 Total : 2696.78 674.19 98954.64 51743.87 477485.62 00:31:17.906 00:31:17.906 03:42:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:17.906 03:42:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.165 03:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:18.165 03:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:31:18.165 03:42:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=1216622d-1872-42ca-99de-73e4273517b8 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 1216622d-1872-42ca-99de-73e4273517b8 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=1216622d-1872-42ca-99de-73e4273517b8 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:21.455 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:21.714 { 00:31:21.714 "uuid": "1216622d-1872-42ca-99de-73e4273517b8", 00:31:21.714 "name": "lvs_0", 00:31:21.714 "base_bdev": "Nvme0n1", 00:31:21.714 "total_data_clusters": 238234, 00:31:21.714 "free_clusters": 238234, 00:31:21.714 "block_size": 512, 00:31:21.714 "cluster_size": 4194304 00:31:21.714 } 00:31:21.714 ]' 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1216622d-1872-42ca-99de-73e4273517b8") .free_clusters' 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:21.714 03:42:22 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1216622d-1872-42ca-99de-73e4273517b8") .cluster_size' 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:31:21.714 952936 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:21.714 03:42:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1216622d-1872-42ca-99de-73e4273517b8 lbd_0 20480 00:31:21.973 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=541ffe1e-b53e-4439-8d2e-515e8fd14225 00:31:21.973 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 541ffe1e-b53e-4439-8d2e-515e8fd14225 lvs_n_0 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=3b1404b9-8208-4ef3-b6cc-28bdb343beff 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 3b1404b9-8208-4ef3-b6cc-28bdb343beff 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3b1404b9-8208-4ef3-b6cc-28bdb343beff 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:22.909 03:42:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:22.909 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:22.909 { 00:31:22.909 "uuid": "1216622d-1872-42ca-99de-73e4273517b8", 00:31:22.909 "name": "lvs_0", 00:31:22.909 "base_bdev": "Nvme0n1", 00:31:22.909 "total_data_clusters": 238234, 00:31:22.909 "free_clusters": 233114, 00:31:22.909 "block_size": 512, 00:31:22.909 "cluster_size": 4194304 00:31:22.909 }, 00:31:22.909 { 00:31:22.909 "uuid": "3b1404b9-8208-4ef3-b6cc-28bdb343beff", 00:31:22.909 "name": "lvs_n_0", 00:31:22.909 "base_bdev": "541ffe1e-b53e-4439-8d2e-515e8fd14225", 00:31:22.909 "total_data_clusters": 5114, 00:31:22.909 "free_clusters": 5114, 00:31:22.909 "block_size": 512, 00:31:22.909 "cluster_size": 4194304 00:31:22.909 } 00:31:22.909 ]' 00:31:22.909 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3b1404b9-8208-4ef3-b6cc-28bdb343beff") .free_clusters' 00:31:22.909 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:22.909 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3b1404b9-8208-4ef3-b6cc-28bdb343beff") .cluster_size' 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:31:23.168 20456 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b1404b9-8208-4ef3-b6cc-28bdb343beff lbd_nest_0 20456 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=eb72971b-852b-4106-8241-a9f83152684e 00:31:23.168 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:23.426 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:23.426 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 eb72971b-852b-4106-8241-a9f83152684e 00:31:23.685 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.944 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:23.944 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:23.944 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:23.944 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:23.944 03:42:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:36.161 Initializing NVMe Controllers 00:31:36.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:36.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:36.161 Initialization complete. Launching workers. 00:31:36.161 ======================================================== 00:31:36.161 Latency(us) 00:31:36.161 Device Information : IOPS MiB/s Average min max 00:31:36.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.60 0.02 21058.29 156.44 47886.38 00:31:36.161 ======================================================== 00:31:36.161 Total : 47.60 0.02 21058.29 156.44 47886.38 00:31:36.161 00:31:36.161 03:42:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:36.162 03:42:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:46.136 Initializing NVMe Controllers 00:31:46.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:46.136 Initialization complete. Launching workers. 
00:31:46.136 ======================================================== 00:31:46.137 Latency(us) 00:31:46.137 Device Information : IOPS MiB/s Average min max 00:31:46.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.69 7.71 16233.85 5681.21 48879.32 00:31:46.137 ======================================================== 00:31:46.137 Total : 61.69 7.71 16233.85 5681.21 48879.32 00:31:46.137 00:31:46.137 03:42:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:46.137 03:42:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:46.137 03:42:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.113 Initializing NVMe Controllers 00:31:56.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:56.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:56.113 Initialization complete. Launching workers. 00:31:56.113 ======================================================== 00:31:56.113 Latency(us) 00:31:56.113 Device Information : IOPS MiB/s Average min max 00:31:56.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8194.21 4.00 3905.02 317.05 7957.10 00:31:56.113 ======================================================== 00:31:56.113 Total : 8194.21 4.00 3905.02 317.05 7957.10 00:31:56.113 00:31:56.113 03:42:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:56.113 03:42:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:06.092 Initializing NVMe Controllers 00:32:06.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:06.092 Initialization complete. Launching workers. 00:32:06.092 ======================================================== 00:32:06.092 Latency(us) 00:32:06.092 Device Information : IOPS MiB/s Average min max 00:32:06.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3884.80 485.60 8239.17 570.79 25843.19 00:32:06.092 ======================================================== 00:32:06.092 Total : 3884.80 485.60 8239.17 570.79 25843.19 00:32:06.092 00:32:06.092 03:43:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:06.092 03:43:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:06.092 03:43:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:18.299 Initializing NVMe Controllers 00:32:18.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.299 Controller IO queue size 128, less than required. 00:32:18.299 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:18.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:18.299 Initialization complete. Launching workers. 00:32:18.299 ======================================================== 00:32:18.299 Latency(us) 00:32:18.299 Device Information : IOPS MiB/s Average min max 00:32:18.299 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13080.43 6.39 9784.85 1617.12 23892.97 00:32:18.299 ======================================================== 00:32:18.299 Total : 13080.43 6.39 9784.85 1617.12 23892.97 00:32:18.299 00:32:18.299 03:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:18.299 03:43:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.277 Initializing NVMe Controllers 00:32:28.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.277 Controller IO queue size 128, less than required. 00:32:28.277 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.277 Initialization complete. Launching workers. 00:32:28.277 ======================================================== 00:32:28.277 Latency(us) 00:32:28.277 Device Information : IOPS MiB/s Average min max 00:32:28.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1206.46 150.81 106476.68 16031.96 225534.71 00:32:28.277 ======================================================== 00:32:28.277 Total : 1206.46 150.81 106476.68 16031.96 225534.71 00:32:28.277 00:32:28.277 03:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:28.277 03:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eb72971b-852b-4106-8241-a9f83152684e 00:32:28.277 03:43:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:28.277 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 541ffe1e-b53e-4439-8d2e-515e8fd14225 00:32:28.277 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.536 rmmod nvme_tcp 
00:32:28.536 rmmod nvme_fabrics 00:32:28.536 rmmod nvme_keyring 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2823191 ']' 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2823191 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2823191 ']' 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2823191 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.536 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2823191 00:32:28.795 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.795 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.795 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2823191' 00:32:28.795 killing process with pid 2823191 00:32:28.795 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2823191 00:32:28.795 03:43:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2823191 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.331 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.332 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.332 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.332 03:43:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.351 00:32:33.351 real 1m37.450s 00:32:33.351 user 5m49.991s 00:32:33.351 sys 0m16.391s 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:33.351 ************************************ 00:32:33.351 END TEST nvmf_perf 00:32:33.351 ************************************ 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.351 ************************************ 00:32:33.351 START TEST nvmf_fio_host 00:32:33.351 ************************************ 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:33.351 * Looking for test storage... 00:32:33.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.351 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.352 --rc genhtml_branch_coverage=1 00:32:33.352 --rc genhtml_function_coverage=1 00:32:33.352 --rc genhtml_legend=1 00:32:33.352 --rc geninfo_all_blocks=1 00:32:33.352 --rc geninfo_unexecuted_blocks=1 00:32:33.352 00:32:33.352 ' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.352 --rc genhtml_branch_coverage=1 00:32:33.352 --rc genhtml_function_coverage=1 00:32:33.352 --rc genhtml_legend=1 00:32:33.352 --rc geninfo_all_blocks=1 00:32:33.352 --rc geninfo_unexecuted_blocks=1 00:32:33.352 00:32:33.352 ' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.352 --rc genhtml_branch_coverage=1 00:32:33.352 --rc genhtml_function_coverage=1 00:32:33.352 --rc genhtml_legend=1 00:32:33.352 --rc geninfo_all_blocks=1 00:32:33.352 --rc geninfo_unexecuted_blocks=1 00:32:33.352 00:32:33.352 ' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:33.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.352 --rc genhtml_branch_coverage=1 00:32:33.352 --rc genhtml_function_coverage=1 00:32:33.352 --rc genhtml_legend=1 00:32:33.352 --rc geninfo_all_blocks=1 00:32:33.352 --rc geninfo_unexecuted_blocks=1 00:32:33.352 00:32:33.352 ' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.352 03:43:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:33.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:33.352 
03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:33.352 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.353 03:43:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.626 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.626 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:38.626 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:38.626 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:38.627 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:38.627 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:38.627 Found net devices under 0000:af:00.0: cvl_0_0 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:38.627 Found net devices under 0000:af:00.1: cvl_0_1 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:38.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:32:38.627 00:32:38.627 --- 10.0.0.2 ping statistics --- 00:32:38.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.627 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:38.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:32:38.627 00:32:38.627 --- 10.0.0.1 ping statistics --- 00:32:38.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.627 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:38.627 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2841113 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2841113 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2841113 ']' 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.886 03:43:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.886 [2024-12-13 03:43:39.933791] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
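The nvmf_tcp_init plumbing and nvmf_tgt launch traced above reduce to roughly the sketch below, assuming the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, and the core mask used in this run; paths are shortened to the spdk checkout and the waitforlisten-style polling is omitted.

# Rough sketch of the target bring-up traced above (this run's names/addresses).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address in the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # connectivity check before starting the target
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &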
00:32:38.886 [2024-12-13 03:43:39.933888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.886 [2024-12-13 03:43:40.056530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.145 [2024-12-13 03:43:40.171372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.145 [2024-12-13 03:43:40.171416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.145 [2024-12-13 03:43:40.171427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.145 [2024-12-13 03:43:40.171438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.145 [2024-12-13 03:43:40.171445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.145 [2024-12-13 03:43:40.173805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.145 [2024-12-13 03:43:40.173881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.145 [2024-12-13 03:43:40.173951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.145 [2024-12-13 03:43:40.173957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.711 03:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.711 03:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:39.711 03:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:39.969 [2024-12-13 03:43:40.935907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.969 03:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:39.969 03:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.969 03:43:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.969 03:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:40.228 Malloc1 00:32:40.228 03:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.487 03:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:40.487 03:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.746 [2024-12-13 03:43:41.825588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.746 03:43:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:41.005 03:43:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:41.263 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:41.263 fio-3.35 00:32:41.263 Starting 1 thread 00:32:43.796 00:32:43.796 test: (groupid=0, jobs=1): err= 0: pid=2841697: Fri Dec 13 03:43:44 2024 00:32:43.796 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(79.4MiB/2006msec) 00:32:43.796 slat (nsec): min=1708, max=270189, avg=1919.78, stdev=2664.10 00:32:43.796 clat (usec): min=3736, max=12294, avg=6932.60, stdev=530.69 00:32:43.796 lat (usec): min=3769, max=12295, avg=6934.52, stdev=530.52 00:32:43.796 clat percentiles (usec): 00:32:43.796 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6521], 00:32:43.796 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 7046], 00:32:43.796 | 70.00th=[ 
7242], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7767], 00:32:43.796 | 99.00th=[ 8094], 99.50th=[ 8160], 99.90th=[10028], 99.95th=[10814], 00:32:43.796 | 99.99th=[11469] 00:32:43.796 bw ( KiB/s): min=39528, max=41024, per=99.99%, avg=40524.00, stdev=676.82, samples=4 00:32:43.796 iops : min= 9882, max=10256, avg=10131.00, stdev=169.21, samples=4 00:32:43.796 write: IOPS=10.1k, BW=39.6MiB/s (41.6MB/s)(79.5MiB/2006msec); 0 zone resets 00:32:43.796 slat (nsec): min=1751, max=247631, avg=1991.21, stdev=2007.22 00:32:43.796 clat (usec): min=2883, max=11273, avg=5620.16, stdev=446.99 00:32:43.796 lat (usec): min=2918, max=11275, avg=5622.15, stdev=446.88 00:32:43.796 clat percentiles (usec): 00:32:43.796 | 1.00th=[ 4686], 5.00th=[ 4948], 10.00th=[ 5080], 20.00th=[ 5276], 00:32:43.796 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5604], 60.00th=[ 5735], 00:32:43.796 | 70.00th=[ 5800], 80.00th=[ 5932], 90.00th=[ 6128], 95.00th=[ 6259], 00:32:43.796 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 9765], 99.95th=[10814], 00:32:43.796 | 99.99th=[11207] 00:32:43.796 bw ( KiB/s): min=40064, max=41032, per=99.98%, avg=40592.00, stdev=411.10, samples=4 00:32:43.796 iops : min=10016, max=10258, avg=10148.00, stdev=102.77, samples=4 00:32:43.796 lat (msec) : 4=0.05%, 10=99.87%, 20=0.08% 00:32:43.796 cpu : usr=74.86%, sys=23.79%, ctx=133, majf=0, minf=1505 00:32:43.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:43.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:43.796 issued rwts: total=20324,20361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:43.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:43.796 00:32:43.796 Run status group 0 (all jobs): 00:32:43.796 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=79.4MiB (83.2MB), run=2006-2006msec 00:32:43.796 WRITE: bw=39.6MiB/s (41.6MB/s), 39.6MiB/s-39.6MiB/s (41.6MB/s-41.6MB/s), io=79.5MiB (83.4MB), run=2006-2006msec 00:32:44.058 ----------------------------------------------------- 00:32:44.058 Suppressions used: 00:32:44.058 count bytes template 00:32:44.058 1 57 /usr/src/fio/parse.c 00:32:44.058 1 8 libtcmalloc_minimal.so 00:32:44.059 ----------------------------------------------------- 00:32:44.059 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:44.059 03:43:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:44.319 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:44.319 fio-3.35 00:32:44.319 Starting 1 thread 00:32:46.854 00:32:46.854 test: (groupid=0, jobs=1): err= 0: pid=2842250: Fri Dec 13 03:43:47 2024 00:32:46.854 read: IOPS=9418, BW=147MiB/s (154MB/s)(295MiB/2005msec) 00:32:46.854 slat (usec): min=2, max=113, avg= 3.26, stdev= 1.89 00:32:46.854 clat (usec): min=1602, max=14776, avg=7820.44, stdev=1811.61 00:32:46.854 lat (usec): min=1605, max=14780, avg=7823.70, stdev=1811.70 00:32:46.854 clat percentiles (usec): 00:32:46.854 | 1.00th=[ 4178], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6259], 00:32:46.854 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8291], 00:32:46.854 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[10814], 00:32:46.854 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14222], 99.95th=[14484], 00:32:46.854 | 99.99th=[14746] 00:32:46.854 bw ( KiB/s): min=67744, max=87392, per=49.67%, avg=74848.00, stdev=8640.99, samples=4 00:32:46.854 iops : min= 4234, max= 5462, avg=4678.00, stdev=540.06, samples=4 00:32:46.854 write: IOPS=5550, BW=86.7MiB/s (90.9MB/s)(153MiB/1767msec); 0 zone resets 00:32:46.854 slat (usec): min=27, max=280, avg=33.03, stdev= 6.90 00:32:46.854 clat (usec): min=4391, max=15981, avg=9948.05, stdev=1689.53 00:32:46.854 lat (usec): min=4421, max=16013, avg=9981.08, stdev=1690.00 00:32:46.854 clat percentiles (usec): 00:32:46.854 | 1.00th=[ 6849], 5.00th=[ 7570], 10.00th=[ 7963], 20.00th=[ 8455], 00:32:46.854 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10159], 00:32:46.854 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12518], 95.00th=[13173], 00:32:46.854 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15533], 99.95th=[15795], 00:32:46.854 | 99.99th=[15926] 00:32:46.854 bw ( KiB/s): min=70496, max=90432, per=87.65%, avg=77832.00, stdev=8744.31, samples=4 00:32:46.854 iops : min= 4406, max= 5652, avg=4864.50, stdev=546.52, samples=4 00:32:46.854 lat (msec) : 2=0.03%, 4=0.42%, 
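This fio pass drives the target through the SPDK fio NVMe plugin rather than the kernel initiator. A condensed sketch of the provisioning and invocation recorded above follows; workspace paths are shortened to the spdk checkout, and the ASAN preload appears only because this job is an ASAN build.

# Condensed from the RPC calls and fio command recorded in this log.
RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# fio reaches the subsystem through the spdk_nvme external ioengine:
LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_nvme' \
  /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096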
10=77.91%, 20=21.63% 00:32:46.854 cpu : usr=83.04%, sys=13.37%, ctx=185, majf=0, minf=2392 00:32:46.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:32:46.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:46.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:46.854 issued rwts: total=18884,9807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:46.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:46.854 00:32:46.854 Run status group 0 (all jobs): 00:32:46.854 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2005-2005msec 00:32:46.854 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=153MiB (161MB), run=1767-1767msec 00:32:47.113 ----------------------------------------------------- 00:32:47.113 Suppressions used: 00:32:47.113 count bytes template 00:32:47.113 1 57 /usr/src/fio/parse.c 00:32:47.113 328 31488 /usr/src/fio/iolog.c 00:32:47.113 1 8 libtcmalloc_minimal.so 00:32:47.113 ----------------------------------------------------- 00:32:47.113 00:32:47.113 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:47.373 03:43:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:32:50.663 Nvme0n1 00:32:50.663 03:43:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b8a635af-aa76-4253-9bf6-380bec39d666 00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b8a635af-aa76-4253-9bf6-380bec39d666 00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=b8a635af-aa76-4253-9bf6-380bec39d666 00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 
00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:53.197 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:53.456 { 00:32:53.456 "uuid": "b8a635af-aa76-4253-9bf6-380bec39d666", 00:32:53.456 "name": "lvs_0", 00:32:53.456 "base_bdev": "Nvme0n1", 00:32:53.456 "total_data_clusters": 930, 00:32:53.456 "free_clusters": 930, 00:32:53.456 "block_size": 512, 00:32:53.456 "cluster_size": 1073741824 00:32:53.456 } 00:32:53.456 ]' 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b8a635af-aa76-4253-9bf6-380bec39d666") .free_clusters' 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b8a635af-aa76-4253-9bf6-380bec39d666") .cluster_size' 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:53.456 952320 00:32:53.456 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:54.025 549ad317-ce18-426b-bb2e-df45ad55485e 00:32:54.025 03:43:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:54.025 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:54.284 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.542 03:43:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:54.542 03:43:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:54.800 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:54.800 fio-3.35 00:32:54.800 Starting 1 thread 00:32:57.334 00:32:57.334 test: (groupid=0, jobs=1): err= 0: pid=2844061: Fri Dec 13 03:43:58 2024 00:32:57.334 read: IOPS=6957, BW=27.2MiB/s (28.5MB/s)(54.5MiB/2007msec) 00:32:57.334 slat (nsec): min=1690, max=360595, avg=1889.67, stdev=3198.66 00:32:57.334 clat (usec): min=659, max=170508, avg=10020.70, stdev=10947.26 00:32:57.334 lat (usec): min=662, max=170533, avg=10022.59, stdev=10947.47 00:32:57.334 clat percentiles (msec): 00:32:57.334 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:32:57.334 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:32:57.334 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:32:57.334 | 99.00th=[ 12], 99.50th=[ 15], 99.90th=[ 171], 99.95th=[ 171], 00:32:57.334 | 99.99th=[ 171] 00:32:57.334 bw ( KiB/s): min=19448, max=30704, per=99.88%, avg=27794.00, stdev=5565.98, samples=4 00:32:57.334 iops : min= 4862, max= 7676, avg=6948.50, stdev=1391.49, samples=4 00:32:57.334 write: IOPS=6964, BW=27.2MiB/s (28.5MB/s)(54.6MiB/2007msec); 0 zone resets 00:32:57.334 slat (nsec): min=1735, max=97787, avg=1948.63, stdev=963.27 00:32:57.334 clat (usec): min=310, max=168856, avg=8207.89, stdev=10229.53 00:32:57.334 lat (usec): min=312, max=168861, avg=8209.83, stdev=10229.77 00:32:57.334 clat percentiles (msec): 00:32:57.334 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:32:57.334 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:32:57.334 | 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:32:57.334 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:32:57.334 | 99.99th=[ 169] 00:32:57.334 bw ( KiB/s): min=20392, max=30528, per=99.91%, avg=27834.00, stdev=4963.63, samples=4 00:32:57.334 iops : min= 5098, max= 7632, avg=6958.50, stdev=1240.91, samples=4 00:32:57.334 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 
00:32:57.334 lat (msec) : 2=0.04%, 4=0.22%, 10=90.32%, 20=8.94%, 250=0.46% 00:32:57.334 cpu : usr=73.78%, sys=25.12%, ctx=111, majf=0, minf=1504 00:32:57.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:57.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.334 issued rwts: total=13963,13978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.334 00:32:57.334 Run status group 0 (all jobs): 00:32:57.334 READ: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=54.5MiB (57.2MB), run=2007-2007msec 00:32:57.334 WRITE: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=54.6MiB (57.3MB), run=2007-2007msec 00:32:57.593 ----------------------------------------------------- 00:32:57.593 Suppressions used: 00:32:57.593 count bytes template 00:32:57.593 1 58 /usr/src/fio/parse.c 00:32:57.593 1 8 libtcmalloc_minimal.so 00:32:57.593 ----------------------------------------------------- 00:32:57.593 00:32:57.593 03:43:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:57.852 03:43:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d661a3b1-61b4-42c0-b249-9e23e3d1711c 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d661a3b1-61b4-42c0-b249-9e23e3d1711c 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d661a3b1-61b4-42c0-b249-9e23e3d1711c 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:59.229 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:59.229 { 00:32:59.229 "uuid": "b8a635af-aa76-4253-9bf6-380bec39d666", 00:32:59.229 "name": "lvs_0", 00:32:59.229 "base_bdev": "Nvme0n1", 00:32:59.230 "total_data_clusters": 930, 00:32:59.230 "free_clusters": 0, 00:32:59.230 "block_size": 512, 00:32:59.230 "cluster_size": 1073741824 00:32:59.230 }, 00:32:59.230 { 00:32:59.230 "uuid": "d661a3b1-61b4-42c0-b249-9e23e3d1711c", 00:32:59.230 "name": "lvs_n_0", 00:32:59.230 "base_bdev": "549ad317-ce18-426b-bb2e-df45ad55485e", 00:32:59.230 "total_data_clusters": 237847, 00:32:59.230 "free_clusters": 237847, 00:32:59.230 "block_size": 512, 00:32:59.230 "cluster_size": 4194304 00:32:59.230 } 00:32:59.230 ]' 00:32:59.230 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d661a3b1-61b4-42c0-b249-9e23e3d1711c") .free_clusters' 00:32:59.230 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:59.230 03:44:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d661a3b1-61b4-42c0-b249-9e23e3d1711c") .cluster_size' 00:32:59.230 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:59.230 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:59.230 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:59.230 951388 00:32:59.230 03:44:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:00.166 ad5fae22-e2c1-4dcd-bf3a-c4490e01b5ac 00:33:00.166 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:00.426 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:00.426 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1351 -- # break 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:00.685 03:44:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:01.250 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:01.250 fio-3.35 00:33:01.250 Starting 1 thread 00:33:03.783 00:33:03.783 test: (groupid=0, jobs=1): err= 0: pid=2845170: Fri Dec 13 03:44:04 2024 00:33:03.783 read: IOPS=6760, BW=26.4MiB/s (27.7MB/s)(53.0MiB/2008msec) 00:33:03.783 slat (nsec): min=1701, max=115881, avg=1911.75, stdev=1380.86 00:33:03.783 clat (usec): min=3607, max=16793, avg=10382.18, stdev=923.97 00:33:03.783 lat (usec): min=3610, max=16795, avg=10384.09, stdev=923.89 00:33:03.783 clat percentiles (usec): 00:33:03.783 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:33:03.783 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:33:03.783 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:33:03.783 | 99.00th=[12387], 99.50th=[12649], 99.90th=[15533], 99.95th=[15795], 00:33:03.783 | 99.99th=[16581] 00:33:03.783 bw ( KiB/s): min=25744, max=27528, per=99.83%, avg=26996.00, stdev=848.21, samples=4 00:33:03.783 iops : min= 6436, max= 6882, avg=6749.00, stdev=212.05, samples=4 00:33:03.783 write: IOPS=6758, BW=26.4MiB/s (27.7MB/s)(53.0MiB/2008msec); 0 zone resets 00:33:03.783 slat (nsec): min=1737, max=82841, avg=1964.33, stdev=867.83 00:33:03.783 clat (usec): min=1651, max=15616, avg=8401.44, stdev=758.60 00:33:03.783 lat (usec): min=1657, max=15619, avg=8403.41, stdev=758.54 00:33:03.783 clat percentiles (usec): 00:33:03.783 | 1.00th=[ 6652], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 7832], 00:33:03.783 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:33:03.783 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:33:03.783 | 99.00th=[10028], 99.50th=[10421], 99.90th=[13304], 99.95th=[14222], 00:33:03.783 | 99.99th=[15533] 00:33:03.783 bw ( KiB/s): min=26704, max=27384, per=100.00%, avg=27044.00, stdev=279.16, samples=4 00:33:03.783 iops : min= 6676, max= 6846, avg=6761.00, stdev=69.79, samples=4 00:33:03.783 lat (msec) : 2=0.01%, 4=0.08%, 10=65.62%, 20=34.29% 00:33:03.783 cpu : usr=73.74%, sys=25.21%, ctx=109, majf=0, minf=1504 00:33:03.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:03.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.783 issued rwts: total=13575,13572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.783 00:33:03.783 Run status group 0 (all jobs): 00:33:03.783 READ: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.0MiB (55.6MB), run=2008-2008msec 00:33:03.783 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=53.0MiB (55.6MB), run=2008-2008msec 00:33:03.783 ----------------------------------------------------- 00:33:03.783 Suppressions used: 00:33:03.783 count bytes template 00:33:03.783 1 58 
/usr/src/fio/parse.c 00:33:03.783 1 8 libtcmalloc_minimal.so 00:33:03.783 ----------------------------------------------------- 00:33:03.783 00:33:03.783 03:44:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:04.042 03:44:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:04.042 03:44:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:08.233 03:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:08.233 03:44:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:11.522 03:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:11.522 03:44:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.427 rmmod nvme_tcp 00:33:13.427 rmmod nvme_fabrics 00:33:13.427 rmmod nvme_keyring 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2841113 ']' 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2841113 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2841113 ']' 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2841113 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841113 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841113' 00:33:13.427 killing process with pid 2841113 00:33:13.427 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2841113 00:33:13.428 03:44:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2841113 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.806 03:44:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.711 00:33:16.711 real 0m43.426s 00:33:16.711 user 2m54.988s 00:33:16.711 sys 0m10.046s 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.711 ************************************ 00:33:16.711 END TEST nvmf_fio_host 00:33:16.711 ************************************ 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.711 ************************************ 00:33:16.711 START TEST nvmf_failover 00:33:16.711 ************************************ 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:16.711 * Looking for test storage... 
00:33:16.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:16.711 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:16.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.972 --rc genhtml_branch_coverage=1 00:33:16.972 --rc genhtml_function_coverage=1 00:33:16.972 --rc genhtml_legend=1 00:33:16.972 --rc geninfo_all_blocks=1 00:33:16.972 --rc geninfo_unexecuted_blocks=1 00:33:16.972 00:33:16.972 ' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:16.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.972 --rc genhtml_branch_coverage=1 00:33:16.972 --rc genhtml_function_coverage=1 00:33:16.972 --rc genhtml_legend=1 00:33:16.972 --rc geninfo_all_blocks=1 00:33:16.972 --rc geninfo_unexecuted_blocks=1 00:33:16.972 00:33:16.972 ' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:16.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.972 --rc genhtml_branch_coverage=1 00:33:16.972 --rc genhtml_function_coverage=1 00:33:16.972 --rc genhtml_legend=1 00:33:16.972 --rc geninfo_all_blocks=1 00:33:16.972 --rc geninfo_unexecuted_blocks=1 00:33:16.972 00:33:16.972 ' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:16.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.972 --rc genhtml_branch_coverage=1 00:33:16.972 --rc genhtml_function_coverage=1 00:33:16.972 --rc genhtml_legend=1 00:33:16.972 --rc geninfo_all_blocks=1 00:33:16.972 --rc geninfo_unexecuted_blocks=1 00:33:16.972 00:33:16.972 ' 00:33:16.972 03:44:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:16.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:16.972 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
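The assignments just traced (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, rpc_py) feed the target-side setup recorded further down at failover.sh lines 22-28. A condensed sketch of that setup, not part of the captured output, using the rpc_py path from the log; the per-port loop is a summary of the three add_listener calls traced later, not a literal quote.

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512 B blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                 # the listeners the failover test flips between
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done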
00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:16.973 03:44:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:22.248 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:22.248 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:22.248 Found net devices under 0000:af:00.0: cvl_0_0 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:22.248 Found net devices under 0000:af:00.1: cvl_0_1 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.248 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:22.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:33:22.249 00:33:22.249 --- 10.0.0.2 ping statistics --- 00:33:22.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.249 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:22.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:33:22.249 00:33:22.249 --- 10.0.0.1 ping statistics --- 00:33:22.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.249 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2850629 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2850629 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2850629 ']' 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:22.249 03:44:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:22.508 [2024-12-13 03:44:23.515609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:22.508 [2024-12-13 03:44:23.515696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:22.508 [2024-12-13 03:44:23.631359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:22.767 [2024-12-13 03:44:23.737054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:22.767 [2024-12-13 03:44:23.737100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.767 [2024-12-13 03:44:23.737110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.767 [2024-12-13 03:44:23.737121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.767 [2024-12-13 03:44:23.737128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:22.767 [2024-12-13 03:44:23.739306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:22.767 [2024-12-13 03:44:23.739408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.767 [2024-12-13 03:44:23.739415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.335 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:23.335 [2024-12-13 03:44:24.534003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.594 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:23.853 Malloc0 00:33:23.853 03:44:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:23.853 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:24.115 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.374 [2024-12-13 03:44:25.376220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.374 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:24.374 [2024-12-13 03:44:25.576803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:24.633 [2024-12-13 03:44:25.785524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2851073 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2851073 /var/tmp/bdevperf.sock 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2851073 ']' 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:24.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.633 03:44:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:25.570 03:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.570 03:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:25.570 03:44:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:26.137 NVMe0n1 00:33:26.137 03:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:26.395 00:33:26.395 03:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2851329 00:33:26.396 03:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:26.396 03:44:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:27.404 03:44:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.686 03:44:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:31.041 03:44:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:31.041 00:33:31.041 03:44:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:31.300 [2024-12-13 03:44:32.267461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 [2024-12-13 03:44:32.267576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:31.300 03:44:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:34.591 03:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.591 [2024-12-13 03:44:35.488186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.591 03:44:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:35.528 03:44:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:35.528 [2024-12-13 03:44:36.708614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.528 [2024-12-13 03:44:36.708846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:35.787 03:44:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2851329 00:33:42.355 { 00:33:42.355 "results": [ 00:33:42.355 { 00:33:42.355 "job": "NVMe0n1", 00:33:42.355 "core_mask": "0x1", 00:33:42.355 "workload": "verify", 00:33:42.355 "status": "finished", 00:33:42.355 "verify_range": { 00:33:42.355 "start": 0, 00:33:42.355 "length": 16384 00:33:42.355 }, 00:33:42.355 "queue_depth": 128, 
00:33:42.355 "io_size": 4096, 00:33:42.355 "runtime": 15.010858, 00:33:42.355 "iops": 9584.395508904288, 00:33:42.355 "mibps": 37.439044956657376, 00:33:42.355 "io_failed": 9853, 00:33:42.355 "io_timeout": 0, 00:33:42.355 "avg_latency_us": 12473.931655448281, 00:33:42.355 "min_latency_us": 477.8666666666667, 00:33:42.355 "max_latency_us": 22219.82476190476 00:33:42.355 } 00:33:42.355 ], 00:33:42.355 "core_count": 1 00:33:42.355 } 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2851073 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2851073 ']' 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2851073 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851073 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851073' 00:33:42.355 killing process with pid 2851073 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2851073 00:33:42.355 03:44:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2851073 00:33:42.622 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:42.622 [2024-12-13 03:44:25.890990] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:42.622 [2024-12-13 03:44:25.891100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851073 ] 00:33:42.622 [2024-12-13 03:44:26.003542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.622 [2024-12-13 03:44:26.118391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.622 Running I/O for 15 seconds... 
00:33:42.622 9553.00 IOPS, 37.32 MiB/s [2024-12-13T02:44:43.831Z] [2024-12-13 03:44:28.719163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:42.622 [2024-12-13 03:44:28.719451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719689] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719898] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.622 [2024-12-13 03:44:28.719957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.622 [2024-12-13 03:44:28.719970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.719980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.719991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85400 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:42.623 [2024-12-13 03:44:28.720337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.623 [2024-12-13 03:44:28.720486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.623 [2024-12-13 03:44:28.720813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.623 [2024-12-13 03:44:28.720822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.720984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.720997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 
03:44:28.721216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.624 [2024-12-13 03:44:28.721226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.624 [2024-12-13 03:44:28.721630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.624 [2024-12-13 03:44:28.721640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85024 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:28.721955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.721966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:33:42.625 [2024-12-13 03:44:28.721979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.625 [2024-12-13 03:44:28.721988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.625 [2024-12-13 03:44:28.721997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85072 len:8 PRP1 0x0 PRP2 0x0 00:33:42.625 [2024-12-13 03:44:28.722010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.722337] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:42.625 [2024-12-13 03:44:28.722373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.625 [2024-12-13 03:44:28.722388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.722399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.625 [2024-12-13 03:44:28.722408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.722418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.625 [2024-12-13 03:44:28.722428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.722438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.625 [2024-12-13 03:44:28.722447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:28.722460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:42.625 [2024-12-13 03:44:28.725522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:42.625 [2024-12-13 03:44:28.725565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:42.625 [2024-12-13 03:44:28.756951] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:42.625 9397.50 IOPS, 36.71 MiB/s [2024-12-13T02:44:43.834Z] 9512.33 IOPS, 37.16 MiB/s [2024-12-13T02:44:43.834Z] 9614.25 IOPS, 37.56 MiB/s [2024-12-13T02:44:43.834Z] [2024-12-13 03:44:32.267798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.267853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.267878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.267900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.267913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.267930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.267942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.267952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.267963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.267972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.267984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.267993] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.625 [2024-12-13 03:44:32.268164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.625 [2024-12-13 03:44:32.268177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.626 [2024-12-13 03:44:32.268543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 
[2024-12-13 03:44:32.268639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.626 [2024-12-13 03:44:32.268880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.626 [2024-12-13 03:44:32.268889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.268900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.268910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.268926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.268936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.268948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.268957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.268968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.268979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.268990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:102 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107616 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:42.627 [2024-12-13 03:44:32.269508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.627 [2024-12-13 03:44:32.269699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.627 [2024-12-13 03:44:32.269710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 
03:44:32.269720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.269986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.269995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.628 [2024-12-13 03:44:32.270242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107984 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107992 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108000 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108008 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108016 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108024 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108032 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108040 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.628 [2024-12-13 03:44:32.270574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.628 [2024-12-13 03:44:32.270580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.628 [2024-12-13 03:44:32.270588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108048 len:8 PRP1 0x0 PRP2 0x0 00:33:42.628 [2024-12-13 03:44:32.270598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 
[2024-12-13 03:44:32.270619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108056 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 [2024-12-13 03:44:32.270656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108064 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 [2024-12-13 03:44:32.270690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108072 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 [2024-12-13 03:44:32.270724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108080 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 [2024-12-13 03:44:32.270756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108088 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 [2024-12-13 03:44:32.270789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107328 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.270815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.629 [2024-12-13 03:44:32.270822] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.629 [2024-12-13 03:44:32.270830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107336 len:8 PRP1 0x0 PRP2 0x0 00:33:42.629 [2024-12-13 03:44:32.270839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.271121] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:42.629 [2024-12-13 03:44:32.271153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:32.271167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.271179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:32.271190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.271200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:32.271210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.271219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:32.271230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:32.271239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:42.629 [2024-12-13 03:44:32.271284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:42.629 [2024-12-13 03:44:32.274309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:42.629 [2024-12-13 03:44:32.343897] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:33:42.629 9468.20 IOPS, 36.99 MiB/s [2024-12-13T02:44:43.838Z] 9542.83 IOPS, 37.28 MiB/s [2024-12-13T02:44:43.838Z] 9577.71 IOPS, 37.41 MiB/s [2024-12-13T02:44:43.838Z] 9626.75 IOPS, 37.60 MiB/s [2024-12-13T02:44:43.838Z] 9635.89 IOPS, 37.64 MiB/s [2024-12-13T02:44:43.838Z] [2024-12-13 03:44:36.708201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:36.708260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.708274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:36.708285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.708295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:36.708305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.708315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.629 [2024-12-13 03:44:36.708325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.708334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325580 is same with the state(6) to be set 00:33:42.629 [2024-12-13 03:44:36.710934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.710964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.710985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.710996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:42.629 [2024-12-13 03:44:36.711295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.629 [2024-12-13 03:44:36.711305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.629 [2024-12-13 03:44:36.711319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.629 [2024-12-13 03:44:36.711328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711508] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.630 [2024-12-13 03:44:36.711687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.711982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.711992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82400 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.630 [2024-12-13 03:44:36.712204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.630 [2024-12-13 03:44:36.712215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 
03:44:36.712399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.631 [2024-12-13 03:44:36.712744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.631 [2024-12-13 03:44:36.712791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82616 len:8 PRP1 0x0 PRP2 0x0 00:33:42.631 [2024-12-13 03:44:36.712801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.631 [2024-12-13 03:44:36.712822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.631 [2024-12-13 03:44:36.712832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:33:42.631 [2024-12-13 03:44:36.712843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.631 [2024-12-13 03:44:36.712859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.631 [2024-12-13 
03:44:36.712867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 00:33:42.631 [2024-12-13 03:44:36.712877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.631 [2024-12-13 03:44:36.712894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.631 [2024-12-13 03:44:36.712903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82640 len:8 PRP1 0x0 PRP2 0x0 00:33:42.631 [2024-12-13 03:44:36.712912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.631 [2024-12-13 03:44:36.712934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.631 [2024-12-13 03:44:36.712942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0 00:33:42.631 [2024-12-13 03:44:36.712952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.631 [2024-12-13 03:44:36.712961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.631 [2024-12-13 03:44:36.712968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.631 [2024-12-13 03:44:36.712976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82656 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.712985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.712993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82664 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82672 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82680 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82688 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82696 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82704 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82720 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 
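The burst above and immediately below is the initiator draining qpair 1 during path failover: every command still queued is completed manually with ABORTED - SQ DELETION (00/08) before bdev_nvme fails the I/O over to the other listener. When scanning output like this, a count is usually more useful than the individual entries; a sketch, assuming the run's output were captured to a file (bdevperf.log is a hypothetical name, not one this job actually uses):

    grep -c 'ABORTED - SQ DELETION' bdevperf.log            # how many queued commands were cancelled by the SQ teardown
    grep -c 'Resetting controller successful' bdevperf.log  # failover.sh@65 below asserts this count is exactly 3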
00:33:42.632 [2024-12-13 03:44:36.713494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82808 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.632 [2024-12-13 03:44:36.713752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 00:33:42.632 [2024-12-13 03:44:36.713761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.632 [2024-12-13 03:44:36.713769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.632 [2024-12-13 03:44:36.713776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.713804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.713811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.713838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.713845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.713870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.713876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.713904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.713910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.713941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.713949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82888 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.713975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.713982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.713990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.713999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:42.633 [2024-12-13 03:44:36.714109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.714289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.714295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.714303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.714316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.723729] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:42.633 [2024-12-13 03:44:36.723743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:42.633 [2024-12-13 03:44:36.723755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82976 len:8 PRP1 0x0 PRP2 0x0 00:33:42.633 [2024-12-13 03:44:36.723768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.633 [2024-12-13 03:44:36.724149] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:42.633 [2024-12-13 03:44:36.724166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:42.633 [2024-12-13 03:44:36.724226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:42.633 [2024-12-13 03:44:36.728348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:42.633 [2024-12-13 03:44:36.847249] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:33:42.633 9515.90 IOPS, 37.17 MiB/s [2024-12-13T02:44:43.842Z] 9544.45 IOPS, 37.28 MiB/s [2024-12-13T02:44:43.842Z] 9551.75 IOPS, 37.31 MiB/s [2024-12-13T02:44:43.842Z] 9558.23 IOPS, 37.34 MiB/s [2024-12-13T02:44:43.842Z] 9587.07 IOPS, 37.45 MiB/s [2024-12-13T02:44:43.842Z] 9583.27 IOPS, 37.43 MiB/s 00:33:42.633 Latency(us) 00:33:42.633 [2024-12-13T02:44:43.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.633 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:42.633 Verification LBA range: start 0x0 length 0x4000 00:33:42.633 NVMe0n1 : 15.01 9584.40 37.44 656.39 0.00 12473.93 477.87 22219.82 00:33:42.633 [2024-12-13T02:44:43.842Z] =================================================================================================================== 00:33:42.633 [2024-12-13T02:44:43.842Z] Total : 9584.40 37.44 656.39 0.00 12473.93 477.87 22219.82 00:33:42.633 Received shutdown signal, test time was about 15.000000 seconds 00:33:42.633 00:33:42.633 Latency(us) 00:33:42.633 [2024-12-13T02:44:43.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.633 [2024-12-13T02:44:43.842Z] =================================================================================================================== 00:33:42.633 [2024-12-13T02:44:43.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:42.633 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:42.633 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:42.633 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2853797 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2853797 /var/tmp/bdevperf.sock 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 2853797 ']' 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:42.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:42.634 03:44:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:43.571 03:44:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.571 03:44:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:43.571 03:44:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:43.571 [2024-12-13 03:44:44.710566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:43.571 03:44:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:43.830 [2024-12-13 03:44:44.891109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:43.830 03:44:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:44.396 NVMe0n1 00:33:44.396 03:44:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:44.654 00:33:44.655 03:44:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:44.913 00:33:45.172 03:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:45.172 03:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:45.172 03:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:45.431 03:44:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:48.720 03:44:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:48.720 03:44:49 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:48.720 03:44:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2854780 00:33:48.720 03:44:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:48.720 03:44:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2854780 00:33:49.657 { 00:33:49.657 "results": [ 00:33:49.657 { 00:33:49.657 "job": "NVMe0n1", 00:33:49.657 "core_mask": "0x1", 00:33:49.657 "workload": "verify", 00:33:49.657 "status": "finished", 00:33:49.657 "verify_range": { 00:33:49.657 "start": 0, 00:33:49.657 "length": 16384 00:33:49.657 }, 00:33:49.657 "queue_depth": 128, 00:33:49.657 "io_size": 4096, 00:33:49.657 "runtime": 1.015774, 00:33:49.657 "iops": 9784.656823269743, 00:33:49.657 "mibps": 38.22131571589743, 00:33:49.657 "io_failed": 0, 00:33:49.657 "io_timeout": 0, 00:33:49.657 "avg_latency_us": 13030.626883417419, 00:33:49.657 "min_latency_us": 2824.289523809524, 00:33:49.657 "max_latency_us": 11671.649523809523 00:33:49.657 } 00:33:49.657 ], 00:33:49.657 "core_count": 1 00:33:49.657 } 00:33:49.916 03:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:49.916 [2024-12-13 03:44:43.730937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:49.916 [2024-12-13 03:44:43.731029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853797 ] 00:33:49.916 [2024-12-13 03:44:43.845623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.916 [2024-12-13 03:44:43.953668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.916 [2024-12-13 03:44:46.486896] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:49.916 [2024-12-13 03:44:46.486974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.916 [2024-12-13 03:44:46.486992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.916 [2024-12-13 03:44:46.487007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.916 [2024-12-13 03:44:46.487018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.916 [2024-12-13 03:44:46.487028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.916 [2024-12-13 03:44:46.487038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.916 [2024-12-13 03:44:46.487048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:49.916 [2024-12-13 03:44:46.487057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:49.916 [2024-12-13 03:44:46.487072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:49.916 [2024-12-13 03:44:46.487121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:49.917 [2024-12-13 03:44:46.487150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:49.917 [2024-12-13 03:44:46.536290] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:49.917 Running I/O for 1 seconds... 00:33:49.917 9702.00 IOPS, 37.90 MiB/s 00:33:49.917 Latency(us) 00:33:49.917 [2024-12-13T02:44:51.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:49.917 Verification LBA range: start 0x0 length 0x4000 00:33:49.917 NVMe0n1 : 1.02 9784.66 38.22 0.00 0.00 13030.63 2824.29 11671.65 00:33:49.917 [2024-12-13T02:44:51.126Z] =================================================================================================================== 00:33:49.917 [2024-12-13T02:44:51.126Z] Total : 9784.66 38.22 0.00 0.00 13030.63 2824.29 11671.65 00:33:49.917 03:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:49.917 03:44:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:49.917 03:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:50.176 03:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:50.176 03:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:50.434 03:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:50.693 03:44:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:53.983 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2853797 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2853797 ']' 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2853797 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2853797 
00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2853797' 00:33:53.984 killing process with pid 2853797 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2853797 00:33:53.984 03:44:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2853797 00:33:54.920 03:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:54.920 03:44:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:54.920 rmmod nvme_tcp 00:33:54.920 rmmod nvme_fabrics 00:33:54.920 rmmod nvme_keyring 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2850629 ']' 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2850629 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2850629 ']' 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2850629 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.920 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850629 00:33:55.179 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:55.179 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:55.179 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850629' 00:33:55.179 killing process with pid 2850629 00:33:55.179 03:44:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2850629 00:33:55.179 03:44:56 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2850629 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.558 03:44:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.464 00:33:58.464 real 0m41.714s 00:33:58.464 user 2m15.130s 00:33:58.464 sys 0m7.693s 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:58.464 ************************************ 00:33:58.464 END TEST nvmf_failover 00:33:58.464 ************************************ 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.464 ************************************ 00:33:58.464 START TEST nvmf_host_discovery 00:33:58.464 ************************************ 00:33:58.464 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:58.724 * Looking for test storage... 
00:33:58.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.724 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:58.724 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:58.724 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:33:58.724 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:58.724 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.724 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:58.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.725 --rc genhtml_branch_coverage=1 00:33:58.725 --rc genhtml_function_coverage=1 00:33:58.725 --rc genhtml_legend=1 00:33:58.725 --rc geninfo_all_blocks=1 00:33:58.725 --rc geninfo_unexecuted_blocks=1 00:33:58.725 00:33:58.725 ' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:58.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.725 --rc genhtml_branch_coverage=1 00:33:58.725 --rc genhtml_function_coverage=1 00:33:58.725 --rc genhtml_legend=1 00:33:58.725 --rc geninfo_all_blocks=1 00:33:58.725 --rc geninfo_unexecuted_blocks=1 00:33:58.725 00:33:58.725 ' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:58.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.725 --rc genhtml_branch_coverage=1 00:33:58.725 --rc genhtml_function_coverage=1 00:33:58.725 --rc genhtml_legend=1 00:33:58.725 --rc geninfo_all_blocks=1 00:33:58.725 --rc geninfo_unexecuted_blocks=1 00:33:58.725 00:33:58.725 ' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:58.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.725 --rc genhtml_branch_coverage=1 00:33:58.725 --rc genhtml_function_coverage=1 00:33:58.725 --rc genhtml_legend=1 00:33:58.725 --rc geninfo_all_blocks=1 00:33:58.725 --rc geninfo_unexecuted_blocks=1 00:33:58.725 00:33:58.725 ' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:58.725 03:44:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:58.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.725 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.726 03:44:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:04.002 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:04.002 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.002 03:45:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:04.002 Found net devices under 0000:af:00.0: cvl_0_0 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:04.002 Found net devices under 0000:af:00.1: cvl_0_1 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.002 
03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.002 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:34:04.262 00:34:04.262 --- 10.0.0.2 ping statistics --- 00:34:04.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.262 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:04.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:34:04.262 00:34:04.262 --- 10.0.0.1 ping statistics --- 00:34:04.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.262 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2859547 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2859547 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2859547 ']' 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.262 03:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.521 [2024-12-13 03:45:05.508316] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:34:04.521 [2024-12-13 03:45:05.508424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.521 [2024-12-13 03:45:05.628583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.780 [2024-12-13 03:45:05.732154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.780 [2024-12-13 03:45:05.732203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.780 [2024-12-13 03:45:05.732214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.780 [2024-12-13 03:45:05.732225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.780 [2024-12-13 03:45:05.732232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.780 [2024-12-13 03:45:05.733578] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.348 [2024-12-13 03:45:06.345102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.348 [2024-12-13 03:45:06.357302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:05.348 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.349 null0 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.349 null1 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2859653 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2859653 /tmp/host.sock 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2859653 ']' 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:05.349 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:05.349 03:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.349 [2024-12-13 03:45:06.462232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:34:05.349 [2024-12-13 03:45:06.462348] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859653 ] 00:34:05.608 [2024-12-13 03:45:06.574613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.608 [2024-12-13 03:45:06.685228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.175 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 [2024-12-13 03:45:07.604650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:06.435 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:06.695 03:45:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:06.695 03:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:07.263 [2024-12-13 03:45:08.336669] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:07.264 [2024-12-13 03:45:08.336704] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:07.264 [2024-12-13 03:45:08.336731] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:07.264 [2024-12-13 03:45:08.465160] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:07.523 [2024-12-13 03:45:08.524999] 
bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:07.523 [2024-12-13 03:45:08.526059] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000325f80:1 started. 00:34:07.523 [2024-12-13 03:45:08.527797] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:07.523 [2024-12-13 03:45:08.527820] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:07.523 [2024-12-13 03:45:08.535349] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000325f80 was disconnected and freed. delete nvme_qpair. 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:07.782 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.782 03:45:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.783 03:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.042 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.302 [2024-12-13 03:45:09.268417] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 00:34:08.302 [2024-12-13 03:45:09.277546] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.302 [2024-12-13 03:45:09.349574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:08.302 [2024-12-13 03:45:09.349860] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:08.302 [2024-12-13 03:45:09.349893] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:08.302 03:45:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:08.302 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:08.303 03:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:08.303 [2024-12-13 03:45:09.477503] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:08.562 [2024-12-13 03:45:09.542246] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:08.562 [2024-12-13 03:45:09.542309] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:08.562 [2024-12-13 03:45:09.542323] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:08.562 [2024-12-13 03:45:09.542333] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.499 [2024-12-13 03:45:10.558178] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:09.499 [2024-12-13 03:45:10.558214] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.499 [2024-12-13 03:45:10.563563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.499 [2024-12-13 03:45:10.563595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.499 [2024-12-13 03:45:10.563608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.499 [2024-12-13 03:45:10.563619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.499 [2024-12-13 03:45:10.563630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.499 [2024-12-13 03:45:10.563640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.499 [2024-12-13 03:45:10.563652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.499 [2024-12-13 03:45:10.563661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.499 
[2024-12-13 03:45:10.563670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:09.499 [2024-12-13 03:45:10.573568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.499 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.499 [2024-12-13 03:45:10.583608] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.499 [2024-12-13 03:45:10.583634] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.499 [2024-12-13 03:45:10.583642] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:09.499 [2024-12-13 03:45:10.583653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.499 [2024-12-13 03:45:10.583686] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.499 [2024-12-13 03:45:10.583976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.499 [2024-12-13 03:45:10.583998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.499 [2024-12-13 03:45:10.584010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.499 [2024-12-13 03:45:10.584026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.499 [2024-12-13 03:45:10.584041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.499 [2024-12-13 03:45:10.584055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.499 [2024-12-13 03:45:10.584072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.499 [2024-12-13 03:45:10.584081] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:09.499 [2024-12-13 03:45:10.584090] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.499 [2024-12-13 03:45:10.584097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:09.499 [2024-12-13 03:45:10.593722] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.499 [2024-12-13 03:45:10.593745] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.499 [2024-12-13 03:45:10.593753] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:09.499 [2024-12-13 03:45:10.593760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.499 [2024-12-13 03:45:10.593782] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.499 [2024-12-13 03:45:10.594084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.499 [2024-12-13 03:45:10.594103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.499 [2024-12-13 03:45:10.594114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.499 [2024-12-13 03:45:10.594130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.499 [2024-12-13 03:45:10.594145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.499 [2024-12-13 03:45:10.594154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.499 [2024-12-13 03:45:10.594164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.499 [2024-12-13 03:45:10.594173] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.499 [2024-12-13 03:45:10.594180] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.500 [2024-12-13 03:45:10.594186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:09.500 [2024-12-13 03:45:10.603818] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.500 [2024-12-13 03:45:10.603844] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.500 [2024-12-13 03:45:10.603851] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.603861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.500 [2024-12-13 03:45:10.603884] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.604137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.500 [2024-12-13 03:45:10.604155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.500 [2024-12-13 03:45:10.604166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.500 [2024-12-13 03:45:10.604182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.500 [2024-12-13 03:45:10.604196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.500 [2024-12-13 03:45:10.604205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.500 [2024-12-13 03:45:10.604215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.500 [2024-12-13 03:45:10.604224] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.500 [2024-12-13 03:45:10.604231] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.500 [2024-12-13 03:45:10.604237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.500 [2024-12-13 03:45:10.613925] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.500 [2024-12-13 03:45:10.613950] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.500 [2024-12-13 03:45:10.613957] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.613964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.500 [2024-12-13 03:45:10.613985] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.614208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.500 [2024-12-13 03:45:10.614225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.500 [2024-12-13 03:45:10.614236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.500 [2024-12-13 03:45:10.614250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.500 [2024-12-13 03:45:10.614264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.500 [2024-12-13 03:45:10.614273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.500 [2024-12-13 03:45:10.614282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.500 [2024-12-13 03:45:10.614290] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.500 [2024-12-13 03:45:10.614300] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.500 [2024-12-13 03:45:10.614306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:09.500 [2024-12-13 03:45:10.624021] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.500 [2024-12-13 03:45:10.624043] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.500 [2024-12-13 03:45:10.624049] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:34:09.500 [2024-12-13 03:45:10.624056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.500 [2024-12-13 03:45:10.624077] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.624263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.500 [2024-12-13 03:45:10.624280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.500 [2024-12-13 03:45:10.624291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.500 [2024-12-13 03:45:10.624305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.500 [2024-12-13 03:45:10.624319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.500 [2024-12-13 03:45:10.624328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.500 [2024-12-13 03:45:10.624337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.500 [2024-12-13 03:45:10.624352] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.500 [2024-12-13 03:45:10.624359] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.500 [2024-12-13 03:45:10.624365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:09.500 [2024-12-13 03:45:10.634112] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.500 [2024-12-13 03:45:10.634135] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.500 [2024-12-13 03:45:10.634141] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.634148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.500 [2024-12-13 03:45:10.634173] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:09.500 [2024-12-13 03:45:10.634411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.500 [2024-12-13 03:45:10.634428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.500 [2024-12-13 03:45:10.634439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.500 [2024-12-13 03:45:10.634454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.500 [2024-12-13 03:45:10.634467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.500 [2024-12-13 03:45:10.634476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.500 [2024-12-13 03:45:10.634489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:09.500 [2024-12-13 03:45:10.634497] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.500 [2024-12-13 03:45:10.634504] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.500 [2024-12-13 03:45:10.634510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:09.500 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.500 [2024-12-13 03:45:10.644209] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:09.500 [2024-12-13 03:45:10.644231] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:09.500 [2024-12-13 03:45:10.644237] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.644244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:09.500 [2024-12-13 03:45:10.644264] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:09.500 [2024-12-13 03:45:10.644417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.500 [2024-12-13 03:45:10.644434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:09.500 [2024-12-13 03:45:10.644445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:09.500 [2024-12-13 03:45:10.644459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:09.500 [2024-12-13 03:45:10.644473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:09.500 [2024-12-13 03:45:10.644482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:09.500 [2024-12-13 03:45:10.644491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:09.500 [2024-12-13 03:45:10.644499] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:09.500 [2024-12-13 03:45:10.644506] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:09.500 [2024-12-13 03:45:10.644513] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:09.501 [2024-12-13 03:45:10.645871] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:09.501 [2024-12-13 03:45:10.645901] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.501 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.760 03:45:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.760 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.761 03:45:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.761 03:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.143 [2024-12-13 03:45:11.915818] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:11.143 [2024-12-13 03:45:11.915841] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:11.143 [2024-12-13 03:45:11.915868] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:11.143 [2024-12-13 03:45:12.003154] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:11.143 [2024-12-13 03:45:12.068846] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:11.143 [2024-12-13 03:45:12.069884] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000327380:1 started. 00:34:11.143 [2024-12-13 03:45:12.071852] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:11.143 [2024-12-13 03:45:12.071886] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:11.143 [2024-12-13 03:45:12.075332] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000327380 was disconnected and freed. delete nvme_qpair. 
00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.143 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.144 request: 00:34:11.144 { 00:34:11.144 "name": "nvme", 00:34:11.144 "trtype": "tcp", 00:34:11.144 "traddr": "10.0.0.2", 00:34:11.144 "adrfam": "ipv4", 00:34:11.144 "trsvcid": "8009", 00:34:11.144 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:11.144 "wait_for_attach": true, 00:34:11.144 "method": "bdev_nvme_start_discovery", 00:34:11.144 "req_id": 1 00:34:11.144 } 00:34:11.144 Got JSON-RPC error response 00:34:11.144 response: 00:34:11.144 { 00:34:11.144 "code": -17, 00:34:11.144 "message": "File exists" 00:34:11.144 } 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.144 request: 00:34:11.144 { 00:34:11.144 "name": "nvme_second", 00:34:11.144 "trtype": "tcp", 00:34:11.144 "traddr": "10.0.0.2", 00:34:11.144 "adrfam": "ipv4", 00:34:11.144 "trsvcid": "8009", 00:34:11.144 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:11.144 "wait_for_attach": true, 00:34:11.144 "method": "bdev_nvme_start_discovery", 00:34:11.144 "req_id": 1 00:34:11.144 } 00:34:11.144 Got JSON-RPC error response 00:34:11.144 response: 00:34:11.144 { 00:34:11.144 "code": -17, 00:34:11.144 "message": "File exists" 00:34:11.144 } 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.144 03:45:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:12.520 [2024-12-13 03:45:13.315520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.520 [2024-12-13 03:45:13.315556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327600 with addr=10.0.0.2, port=8010 00:34:12.520 [2024-12-13 03:45:13.315611] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:12.520 [2024-12-13 03:45:13.315621] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:12.520 [2024-12-13 03:45:13.315634] 
bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:13.455 [2024-12-13 03:45:14.317982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.455 [2024-12-13 03:45:14.318024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=8010 00:34:13.455 [2024-12-13 03:45:14.318083] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:13.455 [2024-12-13 03:45:14.318092] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:13.455 [2024-12-13 03:45:14.318101] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:14.389 [2024-12-13 03:45:15.320063] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:14.389 request: 00:34:14.390 { 00:34:14.390 "name": "nvme_second", 00:34:14.390 "trtype": "tcp", 00:34:14.390 "traddr": "10.0.0.2", 00:34:14.390 "adrfam": "ipv4", 00:34:14.390 "trsvcid": "8010", 00:34:14.390 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:14.390 "wait_for_attach": false, 00:34:14.390 "attach_timeout_ms": 3000, 00:34:14.390 "method": "bdev_nvme_start_discovery", 00:34:14.390 "req_id": 1 00:34:14.390 } 00:34:14.390 Got JSON-RPC error response 00:34:14.390 response: 00:34:14.390 { 00:34:14.390 "code": -110, 00:34:14.390 "message": "Connection timed out" 00:34:14.390 } 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2859653 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.390 rmmod nvme_tcp 00:34:14.390 rmmod nvme_fabrics 00:34:14.390 rmmod nvme_keyring 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2859547 ']' 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2859547 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2859547 ']' 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2859547 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859547 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859547' 00:34:14.390 killing process with pid 2859547 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2859547 00:34:14.390 03:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2859547 00:34:15.767 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.767 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.767 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.767 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:15.767 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:15.767 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.768 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.768 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.768 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.768 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.768 03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.768 
03:45:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:17.671 00:34:17.671 real 0m19.067s 00:34:17.671 user 0m24.030s 00:34:17.671 sys 0m5.765s 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.671 ************************************ 00:34:17.671 END TEST nvmf_host_discovery 00:34:17.671 ************************************ 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.671 ************************************ 00:34:17.671 START TEST nvmf_host_multipath_status 00:34:17.671 ************************************ 00:34:17.671 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:17.671 * Looking for test storage... 00:34:17.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:17.672 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:17.672 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:17.672 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:17.931 03:45:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.931 --rc genhtml_branch_coverage=1 00:34:17.931 --rc genhtml_function_coverage=1 00:34:17.931 --rc genhtml_legend=1 00:34:17.931 --rc geninfo_all_blocks=1 00:34:17.931 --rc geninfo_unexecuted_blocks=1 00:34:17.931 00:34:17.931 ' 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.931 --rc genhtml_branch_coverage=1 00:34:17.931 --rc genhtml_function_coverage=1 00:34:17.931 --rc genhtml_legend=1 00:34:17.931 --rc geninfo_all_blocks=1 00:34:17.931 --rc geninfo_unexecuted_blocks=1 00:34:17.931 00:34:17.931 ' 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.931 --rc genhtml_branch_coverage=1 00:34:17.931 --rc genhtml_function_coverage=1 00:34:17.931 --rc genhtml_legend=1 00:34:17.931 --rc geninfo_all_blocks=1 00:34:17.931 --rc geninfo_unexecuted_blocks=1 00:34:17.931 00:34:17.931 ' 00:34:17.931 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:17.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:17.931 --rc genhtml_branch_coverage=1 00:34:17.931 --rc genhtml_function_coverage=1 00:34:17.932 --rc 
genhtml_legend=1 00:34:17.932 --rc geninfo_all_blocks=1 00:34:17.932 --rc geninfo_unexecuted_blocks=1 00:34:17.932 00:34:17.932 ' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:34:17.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:17.932 03:45:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.195 03:45:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.195 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.196 
03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:23.196 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:23.196 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:23.196 Found net devices under 0000:af:00.0: cvl_0_0 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:23.196 Found net devices under 0000:af:00.1: cvl_0_1 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.196 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:34:23.455 00:34:23.455 --- 10.0.0.2 ping statistics --- 00:34:23.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.455 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:23.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:34:23.455 00:34:23.455 --- 10.0.0.1 ping statistics --- 00:34:23.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.455 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2865237 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2865237 
00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2865237 ']' 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.455 03:45:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:23.455 [2024-12-13 03:45:24.618790] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:23.455 [2024-12-13 03:45:24.618884] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.766 [2024-12-13 03:45:24.736451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:23.766 [2024-12-13 03:45:24.848996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.766 [2024-12-13 03:45:24.849043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.766 [2024-12-13 03:45:24.849054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.766 [2024-12-13 03:45:24.849066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.766 [2024-12-13 03:45:24.849074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:23.766 [2024-12-13 03:45:24.853946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.766 [2024-12-13 03:45:24.853949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2865237 00:34:24.429 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:24.429 [2024-12-13 03:45:25.631836] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:24.689 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:24.689 Malloc0 00:34:24.948 03:45:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:24.948 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:25.206 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:25.465 [2024-12-13 03:45:26.437218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:25.465 [2024-12-13 03:45:26.621645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2865621 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2865621 
/var/tmp/bdevperf.sock 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2865621 ']' 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:25.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.465 03:45:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:26.402 03:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.402 03:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:26.402 03:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:26.661 03:45:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:27.228 Nvme0n1 00:34:27.228 03:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:27.487 Nvme0n1 00:34:27.487 03:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:27.487 03:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:29.392 03:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:29.392 03:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:29.651 03:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:29.910 03:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:30.846 03:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:30.846 03:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:30.846 03:45:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.846 03:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:31.105 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.105 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:31.105 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.105 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.364 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.623 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.623 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:31.623 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.623 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:31.882 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.882 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:31.882 03:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.882 03:45:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:32.141 03:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.141 03:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:32.141 03:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:32.400 03:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:32.400 03:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.779 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:34.040 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.040 03:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:34.040 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.040 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:34.040 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.040 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:34.298 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.298 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:34.298 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.298 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:34.557 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.557 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:34.557 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.557 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:34.815 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.815 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:34.815 03:45:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:35.074 03:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:35.074 03:45:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:36.451 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.452 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:36.709 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.709 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:36.710 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.710 03:45:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.967 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.967 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:36.967 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.967 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:37.226 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.226 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:37.226 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.226 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:37.485 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:37.485 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:37.485 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:34:37.485 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:37.743 03:45:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:38.680 03:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:38.680 03:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:38.680 03:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.680 03:45:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.948 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.948 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:38.948 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.948 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:39.207 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.207 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:39.207 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.207 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.466 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.467 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.467 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:39.467 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.726 03:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.984 03:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.984 03:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:39.984 03:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:40.243 03:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:40.502 03:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:41.438 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:41.438 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:41.438 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.438 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:41.697 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.697 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:41.697 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.697 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:41.956 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:41.956 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:41.956 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.956 03:45:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:41.956 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.956 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:41.956 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.956 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:42.214 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.214 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:42.214 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.214 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:42.473 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:42.473 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:42.473 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.473 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:42.732 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:42.732 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:42.732 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:42.732 03:45:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:42.991 03:45:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:43.927 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:43.928 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:43.928 03:45:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.928 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:44.188 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.188 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:44.188 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.188 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.447 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.447 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.447 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.447 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.707 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.707 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.707 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.707 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.966 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.966 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:44.966 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.966 03:45:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.966 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.966 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:44.966 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.966 
03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.225 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.225 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:45.484 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:45.484 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:45.743 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:45.743 03:45:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:47.121 03:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:47.121 03:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:47.121 03:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.121 03:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.121 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.121 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:47.121 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.121 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.381 03:45:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.381 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.640 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.640 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.640 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.640 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:47.899 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.899 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:47.899 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.899 03:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.158 03:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.158 03:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:48.158 03:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:48.158 03:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:48.417 03:45:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:49.353 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:49.353 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:49.353 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.353 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:49.612 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.612 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:49.612 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.612 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:49.870 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.870 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:49.870 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:49.870 03:45:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.129 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.129 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.129 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.129 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.388 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:50.647 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.647 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:50.647 
03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:50.906 03:45:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:51.165 03:45:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:52.099 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:52.099 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.100 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.100 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.358 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.358 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:52.358 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.358 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.616 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.616 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.616 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.617 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.617 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.617 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.617 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.617 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:52.875 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.875 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:52.875 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.875 03:45:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.134 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.134 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.134 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.134 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.392 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.392 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:53.392 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:53.392 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:53.651 03:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:55.025 03:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:55.025 03:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:55.025 03:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.025 03:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.025 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.283 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.283 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.283 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.283 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.541 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.541 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:55.541 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.542 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2865621 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2865621 ']' 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2865621 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:55.800 03:45:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.800 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865621 00:34:56.059 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:34:56.059 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:56.059 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865621' 00:34:56.059 killing process with pid 2865621 00:34:56.059 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2865621 00:34:56.059 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2865621 00:34:56.059 { 00:34:56.059 "results": [ 00:34:56.059 { 00:34:56.059 "job": "Nvme0n1", 00:34:56.059 "core_mask": "0x4", 00:34:56.059 "workload": "verify", 00:34:56.059 "status": "terminated", 00:34:56.059 "verify_range": { 00:34:56.059 "start": 0, 00:34:56.059 "length": 16384 00:34:56.059 }, 00:34:56.059 "queue_depth": 128, 00:34:56.059 "io_size": 4096, 00:34:56.059 "runtime": 28.392252, 00:34:56.059 "iops": 9265.168539642435, 00:34:56.059 "mibps": 36.19206460797826, 00:34:56.059 "io_failed": 0, 00:34:56.059 "io_timeout": 0, 00:34:56.059 "avg_latency_us": 13791.507227576505, 00:34:56.059 "min_latency_us": 862.1104761904762, 00:34:56.059 "max_latency_us": 3019898.88 00:34:56.059 } 00:34:56.059 ], 00:34:56.059 "core_count": 1 00:34:56.059 } 00:34:56.997 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2865621 00:34:56.997 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:56.997 [2024-12-13 03:45:26.709984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:56.997 [2024-12-13 03:45:26.710080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865621 ] 00:34:56.997 [2024-12-13 03:45:26.820607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.997 [2024-12-13 03:45:26.928143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.997 Running I/O for 90 seconds... 
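The per-port status checks above all follow one pattern: flip a listener's ANA state with nvmf_subsystem_listener_set_ana_state, sleep, then read the io_paths back through the bdevperf RPC socket and filter on trsvcid with jq. A minimal standalone sketch of that pattern, assuming the same socket, target address, and NQN as in this log (the port_field helper name and the rpc/sock shell variables are illustrative shorthand, not part of multipath_status.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Set the ANA state per listener port (states exercised in this log:
  # optimized, non_optimized, inaccessible).
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1

  # Read one field (current/connected/accessible) of one port's io_path from bdevperf.
  port_field() {   # usage: port_field 4420 current
      $rpc -s "$sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
  }
  [[ $(port_field 4421 current) == true ]] && echo "I/O moved to the 4421 path"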
00:34:56.997 9931.00 IOPS, 38.79 MiB/s [2024-12-13T02:45:58.206Z] 9937.50 IOPS, 38.82 MiB/s [2024-12-13T02:45:58.206Z] 9987.67 IOPS, 39.01 MiB/s [2024-12-13T02:45:58.206Z] 9937.50 IOPS, 38.82 MiB/s [2024-12-13T02:45:58.206Z] 9959.20 IOPS, 38.90 MiB/s [2024-12-13T02:45:58.206Z] 9914.67 IOPS, 38.73 MiB/s [2024-12-13T02:45:58.206Z] 9899.71 IOPS, 38.67 MiB/s [2024-12-13T02:45:58.206Z] 9921.12 IOPS, 38.75 MiB/s [2024-12-13T02:45:58.206Z] 9922.22 IOPS, 38.76 MiB/s [2024-12-13T02:45:58.206Z] 9909.00 IOPS, 38.71 MiB/s [2024-12-13T02:45:58.206Z] 9921.00 IOPS, 38.75 MiB/s [2024-12-13T02:45:58.206Z] 9913.83 IOPS, 38.73 MiB/s [2024-12-13T02:45:58.206Z] [2024-12-13 03:45:41.320891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.320964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 
03:45:41.321234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.997 [2024-12-13 03:45:41.321245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:56.997 [2024-12-13 03:45:41.321261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.321979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.321989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.998 [2024-12-13 03:45:41.322378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:56.998 [2024-12-13 03:45:41.322396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:56.998 [2024-12-13 03:45:41.322406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.322969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.322988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:34:56.999 [2024-12-13 03:45:41.323375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:56.999 [2024-12-13 03:45:41.323554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:56.999 [2024-12-13 03:45:41.323564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.323890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.323927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.323957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.323987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.323997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:57.000 [2024-12-13 03:45:41.324264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.000 [2024-12-13 03:45:41.324485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:57.000 [2024-12-13 03:45:41.324869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.000 [2024-12-13 03:45:41.324878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:41.324900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:41.324910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:41.324937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:41.324947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:41.324969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:41.324979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:41.325001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:41.325011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:41.325033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:41.325043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:57.001 9687.15 IOPS, 37.84 MiB/s [2024-12-13T02:45:58.210Z] 8995.21 IOPS, 35.14 MiB/s [2024-12-13T02:45:58.210Z] 8395.53 IOPS, 32.80 MiB/s [2024-12-13T02:45:58.210Z] 8058.25 IOPS, 31.48 MiB/s [2024-12-13T02:45:58.210Z] 8167.35 IOPS, 31.90 MiB/s [2024-12-13T02:45:58.210Z] 8258.83 IOPS, 32.26 MiB/s [2024-12-13T02:45:58.210Z] 8459.21 IOPS, 33.04 MiB/s [2024-12-13T02:45:58.210Z] 8648.50 IOPS, 33.78 MiB/s [2024-12-13T02:45:58.210Z] 8787.19 IOPS, 34.32 MiB/s [2024-12-13T02:45:58.210Z] 8823.09 IOPS, 34.47 MiB/s [2024-12-13T02:45:58.210Z] 8866.00 IOPS, 34.63 MiB/s [2024-12-13T02:45:58.210Z] 8952.54 IOPS, 34.97 MiB/s [2024-12-13T02:45:58.210Z] 9088.60 IOPS, 35.50 MiB/s [2024-12-13T02:45:58.210Z] 9208.19 IOPS, 35.97 MiB/s [2024-12-13T02:45:58.210Z] [2024-12-13 03:45:54.777723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.777779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.777854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:54.777868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:54.779370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.779801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.779811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:54.781427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:54.781454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:57.001 [2024-12-13 03:45:54.781481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:57.001 [2024-12-13 03:45:54.781524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.001 [2024-12-13 03:45:54.781534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:57.002 [2024-12-13 03:45:54.781550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.002 [2024-12-13 03:45:54.781560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:57.002 [2024-12-13 03:45:54.781577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.002 [2024-12-13 03:45:54.781587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:57.002 [2024-12-13 03:45:54.781604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.002 [2024-12-13 03:45:54.781613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:57.002 [2024-12-13 03:45:54.781630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
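The long run of paired *NOTICE* lines above is SPDK's qpair tracing: nvme_io_qpair_print_command prints each READ/WRITE submitted on qid:1, and spdk_nvme_print_completion prints its completion, here carrying the ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), which appears to be the multipath_status test exercising a path whose ANA state has been made inaccessible. If this console output has been captured to a file, those completions can be tallied with standard tools; the snippet below is only a sketch, and console.log is a hypothetical capture file name, not something the test writes itself.
# Count ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions per queue id from a
# saved copy of the console output. Sketch only: "console.log" is a hypothetical
# file name for the captured log.
grep 'ASYMMETRIC ACCESS INACCESSIBLE' console.log \
  | grep -oE 'qid:[0-9]+' \
  | sort | uniq -c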
00:34:57.002 [2024-12-13 03:45:54.781640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:57.002 [2024-12-13 03:45:54.781656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.002 [2024-12-13 03:45:54.781667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:57.002 [2024-12-13 03:45:54.781684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:57.002 [2024-12-13 03:45:54.781694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:57.002 9238.19 IOPS, 36.09 MiB/s [2024-12-13T02:45:58.211Z] 9254.79 IOPS, 36.15 MiB/s [2024-12-13T02:45:58.211Z] Received shutdown signal, test time was about 28.392923 seconds 00:34:57.002 00:34:57.002 Latency(us) 00:34:57.002 [2024-12-13T02:45:58.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.002 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:57.002 Verification LBA range: start 0x0 length 0x4000 00:34:57.002 Nvme0n1 : 28.39 9265.17 36.19 0.00 0.00 13791.51 862.11 3019898.88 00:34:57.002 [2024-12-13T02:45:58.211Z] =================================================================================================================== 00:34:57.002 [2024-12-13T02:45:58.211Z] Total : 9265.17 36.19 0.00 0.00 13791.51 862.11 3019898.88 00:34:57.002 03:45:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.002 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.002 rmmod nvme_tcp 00:34:57.002 rmmod nvme_fabrics 00:34:57.002 rmmod nvme_keyring 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2865237 
']' 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2865237 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2865237 ']' 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2865237 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865237 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865237' 00:34:57.261 killing process with pid 2865237 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2865237 00:34:57.261 03:45:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2865237 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.637 03:45:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:00.541 00:35:00.541 real 0m42.922s 00:35:00.541 user 1m55.907s 00:35:00.541 sys 0m10.976s 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:00.541 ************************************ 00:35:00.541 END TEST nvmf_host_multipath_status 00:35:00.541 ************************************ 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test 
nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.541 ************************************ 00:35:00.541 START TEST nvmf_discovery_remove_ifc 00:35:00.541 ************************************ 00:35:00.541 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:00.800 * Looking for test storage... 00:35:00.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.800 --rc genhtml_branch_coverage=1 00:35:00.800 --rc genhtml_function_coverage=1 00:35:00.800 --rc genhtml_legend=1 00:35:00.800 --rc geninfo_all_blocks=1 00:35:00.800 --rc geninfo_unexecuted_blocks=1 00:35:00.800 00:35:00.800 ' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.800 --rc genhtml_branch_coverage=1 00:35:00.800 --rc genhtml_function_coverage=1 00:35:00.800 --rc genhtml_legend=1 00:35:00.800 --rc geninfo_all_blocks=1 00:35:00.800 --rc geninfo_unexecuted_blocks=1 00:35:00.800 00:35:00.800 ' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.800 --rc genhtml_branch_coverage=1 00:35:00.800 --rc genhtml_function_coverage=1 00:35:00.800 --rc genhtml_legend=1 00:35:00.800 --rc geninfo_all_blocks=1 00:35:00.800 --rc geninfo_unexecuted_blocks=1 00:35:00.800 00:35:00.800 ' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:00.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.800 --rc genhtml_branch_coverage=1 00:35:00.800 --rc genhtml_function_coverage=1 00:35:00.800 --rc genhtml_legend=1 00:35:00.800 --rc geninfo_all_blocks=1 00:35:00.800 --rc geninfo_unexecuted_blocks=1 00:35:00.800 00:35:00.800 ' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.800 
03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.800 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:00.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.801 03:46:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:06.078 03:46:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:06.078 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.078 03:46:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:06.078 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.078 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:06.079 Found net devices under 0000:af:00.0: cvl_0_0 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:06.079 Found net devices under 0000:af:00.1: cvl_0_1 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.079 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:06.338 
03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:06.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:35:06.338 00:35:06.338 --- 10.0.0.2 ping statistics --- 00:35:06.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.338 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:06.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:35:06.338 00:35:06.338 --- 10.0.0.1 ping statistics --- 00:35:06.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.338 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2874289 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2874289 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2874289 ']' 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:06.338 03:46:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.596 [2024-12-13 03:46:07.593880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:06.596 [2024-12-13 03:46:07.593979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.596 [2024-12-13 03:46:07.709720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.854 [2024-12-13 03:46:07.812680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.854 [2024-12-13 03:46:07.812724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.854 [2024-12-13 03:46:07.812734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.854 [2024-12-13 03:46:07.812745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.854 [2024-12-13 03:46:07.812753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:06.854 [2024-12-13 03:46:07.814168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 [2024-12-13 03:46:08.440979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.423 [2024-12-13 03:46:08.449158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:07.423 null0 00:35:07.423 [2024-12-13 03:46:08.481142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2874523 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2874523 /tmp/host.sock 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2874523 ']' 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:07.423 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.423 03:46:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.423 [2024-12-13 03:46:08.579953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:07.423 [2024-12-13 03:46:08.580059] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874523 ] 00:35:07.729 [2024-12-13 03:46:08.692314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.729 [2024-12-13 03:46:08.799693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.389 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.648 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.648 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:08.648 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.648 03:46:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.582 [2024-12-13 03:46:10.781583] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:09.582 [2024-12-13 03:46:10.781619] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:09.582 [2024-12-13 03:46:10.781644] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:09.840 [2024-12-13 03:46:10.909049] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:10.098 [2024-12-13 03:46:11.131476] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:10.098 [2024-12-13 03:46:11.132702] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 00:35:10.098 [2024-12-13 03:46:11.134310] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:10.098 [2024-12-13 03:46:11.134365] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:10.098 [2024-12-13 03:46:11.134428] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:10.098 [2024-12-13 03:46:11.134449] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:10.098 [2024-12-13 03:46:11.134482] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.098 [2024-12-13 03:46:11.140714] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
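By this point two SPDK apps are running: the target (nvmf_tgt -m 0x2 inside cvl_0_0_ns_spdk, listening on 10.0.0.2 port 8009 for discovery and port 4420 for I/O) and the host-side app started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme. A condensed sketch of the host-side RPC sequence the trace records; the test drives it through its rpc_cmd helper, and the rpc.py wrapper below is only an assumption used to make the sketch self-contained:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"
  $RPC bdev_nvme_set_options -e 1                  # options exactly as recorded in the trace
  $RPC framework_start_init                        # needed because the app was started with --wait-for-rpc
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # wait_for_bdev nvme0n1, condensed from the get_bdev_list polling visible in the trace
  while [[ "$($RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
      sleep 1
  done

Discovery finds the nqn.2016-06.io.spdk:cnode0 subsystem on 10.0.0.2:4420, attaches a controller to it, and the namespace surfaces on the host as bdev nvme0n1, which is what the repeated get_bdev_list checks above keep comparing against.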
00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:10.098 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:10.356 03:46:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:11.289 03:46:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:12.662 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.663 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:12.663 03:46:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:13.597 03:46:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:35:14.529 03:46:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:15.463 [2024-12-13 03:46:16.575330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:15.463 [2024-12-13 03:46:16.575393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.463 [2024-12-13 03:46:16.575408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.463 [2024-12-13 03:46:16.575421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.463 [2024-12-13 03:46:16.575430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.463 [2024-12-13 03:46:16.575441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.463 [2024-12-13 03:46:16.575450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.463 [2024-12-13 03:46:16.575460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.463 [2024-12-13 03:46:16.575469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.463 [2024-12-13 03:46:16.575479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:15.463 [2024-12-13 03:46:16.575494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.463 [2024-12-13 03:46:16.575503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:15.463 [2024-12-13 03:46:16.585349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:15.463 [2024-12-13 03:46:16.595385] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:15.463 [2024-12-13 03:46:16.595408] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:15.463 [2024-12-13 03:46:16.595417] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:15.463 [2024-12-13 03:46:16.595425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:15.463 [2024-12-13 03:46:16.595456] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
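Everything from the spdk_sock_recv() errno 110 message onward is the intended effect of the step the script took a few entries earlier: it deletes the target's address and downs the interface while both the NVM controller and the discovery connection are still live. The two commands, as recorded in the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

Because discovery was started with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, the host bdev layer retries the connection roughly once per second and declares the controller lost after about two seconds of failures, which is the disconnect/reset/reconnect sequence the entries that follow record.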
00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:15.463 03:46:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:16.836 [2024-12-13 03:46:17.608940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:16.836 [2024-12-13 03:46:17.608994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420 00:35:16.836 [2024-12-13 03:46:17.609017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:16.836 [2024-12-13 03:46:17.609061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:16.836 [2024-12-13 03:46:17.609684] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:16.836 [2024-12-13 03:46:17.609733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:16.836 [2024-12-13 03:46:17.609755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:16.836 [2024-12-13 03:46:17.609771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:16.836 [2024-12-13 03:46:17.609787] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:16.836 [2024-12-13 03:46:17.609800] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:16.836 [2024-12-13 03:46:17.609811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:16.836 [2024-12-13 03:46:17.609827] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:16.836 [2024-12-13 03:46:17.609839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:16.836 03:46:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.836 03:46:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:16.836 03:46:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:17.769 [2024-12-13 03:46:18.612326] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:17.769 [2024-12-13 03:46:18.612354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:17.769 [2024-12-13 03:46:18.612369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:17.769 [2024-12-13 03:46:18.612378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:17.769 [2024-12-13 03:46:18.612389] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:17.769 [2024-12-13 03:46:18.612402] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:17.769 [2024-12-13 03:46:18.612409] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:17.769 [2024-12-13 03:46:18.612415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:17.769 [2024-12-13 03:46:18.612446] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:17.769 [2024-12-13 03:46:18.612474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.769 [2024-12-13 03:46:18.612488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.769 [2024-12-13 03:46:18.612500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.769 [2024-12-13 03:46:18.612510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.769 [2024-12-13 03:46:18.612520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.769 [2024-12-13 03:46:18.612530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.769 [2024-12-13 03:46:18.612540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.769 [2024-12-13 03:46:18.612552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.769 [2024-12-13 03:46:18.612563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:17.769 [2024-12-13 03:46:18.612572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.769 [2024-12-13 03:46:18.612581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
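Both the NVM controller (cnode0) and the discovery controller are now in the failed state and the discovery entry for 10.0.0.2:4420 has been removed, so nvme0n1 disappears and the script's wait for an empty bdev list succeeds in the entries that follow. This is not part of the test script, but a hypothetical way to confirm the same state by hand over the host socket (using the $RPC wrapper assumed in the earlier sketch) would be:

  $RPC bdev_get_bdevs | jq -r '.[].name'           # prints nothing once nvme0n1 has been deleted
  $RPC bdev_nvme_get_controllers                   # lists whatever NVMe controllers the bdev layer still tracks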
00:35:17.769 [2024-12-13 03:46:18.612618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325800 (9): Bad file descriptor 00:35:17.769 [2024-12-13 03:46:18.613615] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:17.769 [2024-12-13 03:46:18.613636] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:17.769 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:17.770 03:46:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:18.703 03:46:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:18.703 03:46:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:19.638 [2024-12-13 03:46:20.626371] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:19.638 [2024-12-13 03:46:20.626402] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:19.638 [2024-12-13 03:46:20.626435] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:19.638 [2024-12-13 03:46:20.752836] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.897 [2024-12-13 03:46:20.897804] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:19.897 [2024-12-13 03:46:20.898906] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x615000326e80:1 started. 
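The repeated rpc_cmd/jq/sort/xargs trace above is a one-second polling loop waiting for the discovered namespace to come back as a bdev. A minimal sketch of that pattern, with names taken from the trace (socket /tmp/host.sock, expected bdev nvme1n1) and rpc_cmd replaced by a direct rpc.py call; the real helper compares the full, sorted list rather than a substring:

  get_bdev_list() {
      # List bdev names on the host app as one space-separated, sorted line.
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local bdev=$1
      # Poll once a second until the expected bdev shows up.
      while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme1n1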
00:35:19.897 [2024-12-13 03:46:20.900540] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:19.897 [2024-12-13 03:46:20.900587] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:19.897 [2024-12-13 03:46:20.900633] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:19.897 [2024-12-13 03:46:20.900653] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:19.897 [2024-12-13 03:46:20.900665] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:19.897 [2024-12-13 03:46:20.906094] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x615000326e80 was disconnected and freed. delete nvme_qpair. 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:19.897 03:46:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2874523 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2874523 ']' 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2874523 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.831 03:46:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874523 00:35:20.831 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.831 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.831 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874523' 00:35:20.831 killing process with pid 2874523 
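The killprocess calls traced here and just below follow a fixed shape: verify the pid is alive, check the process name so the sudo wrapper is never signalled, then kill and reap. A hedged reconstruction from the xtrace only (the real helper lives in common/autotest_common.sh and carries more error handling):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 0                       # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [[ $name != sudo ]] || return 1                  # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                              # reap; a killed app exits non-zero
  }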
00:35:20.831 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2874523 00:35:20.831 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2874523 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:21.765 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:21.765 rmmod nvme_tcp 00:35:21.765 rmmod nvme_fabrics 00:35:22.023 rmmod nvme_keyring 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2874289 ']' 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2874289 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2874289 ']' 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2874289 00:35:22.023 03:46:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874289 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874289' 00:35:22.023 killing process with pid 2874289 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2874289 00:35:22.023 03:46:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2874289 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:22.960 03:46:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:22.960 03:46:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:25.493 00:35:25.493 real 0m24.486s 00:35:25.493 user 0m31.867s 00:35:25.493 sys 0m5.725s 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.493 ************************************ 00:35:25.493 END TEST nvmf_discovery_remove_ifc 00:35:25.493 ************************************ 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.493 ************************************ 00:35:25.493 START TEST nvmf_identify_kernel_target 00:35:25.493 ************************************ 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:25.493 * Looking for test storage... 
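The nvmftestfini teardown traced just above reduces to: unload the host transport modules, stop the target app, scrub only the firewall rules the test tagged, and remove the test namespace. A simplified sketch of that sequence (retry loops dropped; $nvmfpid and the namespace removal performed by remove_spdk_ns are assumptions inferred from the trace):

  sync
  modprobe -v -r nvme-tcp                                  # host-side transport modules loaded for the test
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt app started at setup
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only rules commented with SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk                          # assumed effect of remove_spdk_ns
  ip -4 addr flush cvl_0_1                                 # clear the initiator-side address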
00:35:25.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:25.493 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.494 --rc genhtml_branch_coverage=1 00:35:25.494 --rc genhtml_function_coverage=1 00:35:25.494 --rc genhtml_legend=1 00:35:25.494 --rc geninfo_all_blocks=1 00:35:25.494 --rc geninfo_unexecuted_blocks=1 00:35:25.494 00:35:25.494 ' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.494 --rc genhtml_branch_coverage=1 00:35:25.494 --rc genhtml_function_coverage=1 00:35:25.494 --rc genhtml_legend=1 00:35:25.494 --rc geninfo_all_blocks=1 00:35:25.494 --rc geninfo_unexecuted_blocks=1 00:35:25.494 00:35:25.494 ' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.494 --rc genhtml_branch_coverage=1 00:35:25.494 --rc genhtml_function_coverage=1 00:35:25.494 --rc genhtml_legend=1 00:35:25.494 --rc geninfo_all_blocks=1 00:35:25.494 --rc geninfo_unexecuted_blocks=1 00:35:25.494 00:35:25.494 ' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:25.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.494 --rc genhtml_branch_coverage=1 00:35:25.494 --rc genhtml_function_coverage=1 00:35:25.494 --rc genhtml_legend=1 00:35:25.494 --rc geninfo_all_blocks=1 00:35:25.494 --rc geninfo_unexecuted_blocks=1 00:35:25.494 00:35:25.494 ' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.494 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:25.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:25.495 03:46:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:30.761 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:30.762 03:46:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:30.762 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:30.762 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:30.762 Found net devices under 0000:af:00.0: cvl_0_0 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:30.762 Found net devices under 0000:af:00.1: cvl_0_1 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:30.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:30.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:35:30.762 00:35:30.762 --- 10.0.0.2 ping statistics --- 00:35:30.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.762 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:35:30.762 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:30.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:30.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:35:30.762 00:35:30.762 --- 10.0.0.1 ping statistics --- 00:35:30.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:30.763 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:30.763 03:46:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:30.763 03:46:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:33.295 Waiting for block devices as requested 00:35:33.295 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:33.295 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:33.295 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:33.553 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:33.553 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:33.553 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:33.553 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:33.813 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:33.813 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:33.813 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:34.071 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:34.071 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:34.071 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:34.071 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:34.328 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:34.329 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:34.329 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
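configure_kernel_target, traced below, builds a Linux-kernel nvmet subsystem over the local /dev/nvme0n1 and exposes it on 10.0.0.1:4420 over TCP. The xtrace shows the mkdir/echo commands but not their redirection targets, so the configfs attribute paths in this sketch are inferred from the standard nvmet layout rather than copied from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported as Model Number in identify below
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # publish the subsystem on the port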
00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:34.587 No valid GPT data, bailing 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:34.587 00:35:34.587 Discovery Log Number of Records 2, Generation counter 2 00:35:34.587 =====Discovery Log Entry 0====== 00:35:34.587 trtype: tcp 00:35:34.587 adrfam: ipv4 00:35:34.587 subtype: current discovery subsystem 00:35:34.587 treq: not specified, sq flow control disable supported 00:35:34.587 portid: 1 00:35:34.587 trsvcid: 4420 00:35:34.587 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:34.587 traddr: 10.0.0.1 00:35:34.587 eflags: none 00:35:34.587 sectype: none 00:35:34.587 =====Discovery Log Entry 1====== 00:35:34.587 trtype: tcp 00:35:34.587 adrfam: ipv4 00:35:34.587 subtype: nvme subsystem 00:35:34.587 treq: not specified, sq flow control disable 
supported 00:35:34.587 portid: 1 00:35:34.587 trsvcid: 4420 00:35:34.587 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:34.587 traddr: 10.0.0.1 00:35:34.587 eflags: none 00:35:34.587 sectype: none 00:35:34.587 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:34.587 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:34.847 ===================================================== 00:35:34.847 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:34.847 ===================================================== 00:35:34.847 Controller Capabilities/Features 00:35:34.847 ================================ 00:35:34.847 Vendor ID: 0000 00:35:34.847 Subsystem Vendor ID: 0000 00:35:34.847 Serial Number: 6c4eca3bb60a02c4079e 00:35:34.847 Model Number: Linux 00:35:34.847 Firmware Version: 6.8.9-20 00:35:34.847 Recommended Arb Burst: 0 00:35:34.847 IEEE OUI Identifier: 00 00 00 00:35:34.847 Multi-path I/O 00:35:34.847 May have multiple subsystem ports: No 00:35:34.847 May have multiple controllers: No 00:35:34.847 Associated with SR-IOV VF: No 00:35:34.847 Max Data Transfer Size: Unlimited 00:35:34.847 Max Number of Namespaces: 0 00:35:34.847 Max Number of I/O Queues: 1024 00:35:34.847 NVMe Specification Version (VS): 1.3 00:35:34.847 NVMe Specification Version (Identify): 1.3 00:35:34.847 Maximum Queue Entries: 1024 00:35:34.847 Contiguous Queues Required: No 00:35:34.847 Arbitration Mechanisms Supported 00:35:34.847 Weighted Round Robin: Not Supported 00:35:34.847 Vendor Specific: Not Supported 00:35:34.847 Reset Timeout: 7500 ms 00:35:34.847 Doorbell Stride: 4 bytes 00:35:34.847 NVM Subsystem Reset: Not Supported 00:35:34.847 Command Sets Supported 00:35:34.847 NVM Command Set: Supported 00:35:34.847 Boot Partition: Not Supported 00:35:34.847 Memory Page Size Minimum: 4096 bytes 00:35:34.847 Memory Page Size Maximum: 4096 bytes 00:35:34.847 Persistent Memory Region: Not Supported 00:35:34.847 Optional Asynchronous Events Supported 00:35:34.847 Namespace Attribute Notices: Not Supported 00:35:34.847 Firmware Activation Notices: Not Supported 00:35:34.847 ANA Change Notices: Not Supported 00:35:34.847 PLE Aggregate Log Change Notices: Not Supported 00:35:34.847 LBA Status Info Alert Notices: Not Supported 00:35:34.847 EGE Aggregate Log Change Notices: Not Supported 00:35:34.847 Normal NVM Subsystem Shutdown event: Not Supported 00:35:34.847 Zone Descriptor Change Notices: Not Supported 00:35:34.847 Discovery Log Change Notices: Supported 00:35:34.847 Controller Attributes 00:35:34.847 128-bit Host Identifier: Not Supported 00:35:34.847 Non-Operational Permissive Mode: Not Supported 00:35:34.847 NVM Sets: Not Supported 00:35:34.847 Read Recovery Levels: Not Supported 00:35:34.847 Endurance Groups: Not Supported 00:35:34.847 Predictable Latency Mode: Not Supported 00:35:34.847 Traffic Based Keep ALive: Not Supported 00:35:34.847 Namespace Granularity: Not Supported 00:35:34.847 SQ Associations: Not Supported 00:35:34.847 UUID List: Not Supported 00:35:34.847 Multi-Domain Subsystem: Not Supported 00:35:34.847 Fixed Capacity Management: Not Supported 00:35:34.847 Variable Capacity Management: Not Supported 00:35:34.847 Delete Endurance Group: Not Supported 00:35:34.847 Delete NVM Set: Not Supported 00:35:34.847 Extended LBA Formats Supported: Not Supported 00:35:34.847 Flexible Data Placement 
Supported: Not Supported 00:35:34.847 00:35:34.847 Controller Memory Buffer Support 00:35:34.847 ================================ 00:35:34.847 Supported: No 00:35:34.847 00:35:34.847 Persistent Memory Region Support 00:35:34.847 ================================ 00:35:34.847 Supported: No 00:35:34.847 00:35:34.847 Admin Command Set Attributes 00:35:34.847 ============================ 00:35:34.847 Security Send/Receive: Not Supported 00:35:34.847 Format NVM: Not Supported 00:35:34.847 Firmware Activate/Download: Not Supported 00:35:34.847 Namespace Management: Not Supported 00:35:34.847 Device Self-Test: Not Supported 00:35:34.847 Directives: Not Supported 00:35:34.847 NVMe-MI: Not Supported 00:35:34.847 Virtualization Management: Not Supported 00:35:34.847 Doorbell Buffer Config: Not Supported 00:35:34.847 Get LBA Status Capability: Not Supported 00:35:34.847 Command & Feature Lockdown Capability: Not Supported 00:35:34.847 Abort Command Limit: 1 00:35:34.848 Async Event Request Limit: 1 00:35:34.848 Number of Firmware Slots: N/A 00:35:34.848 Firmware Slot 1 Read-Only: N/A 00:35:34.848 Firmware Activation Without Reset: N/A 00:35:34.848 Multiple Update Detection Support: N/A 00:35:34.848 Firmware Update Granularity: No Information Provided 00:35:34.848 Per-Namespace SMART Log: No 00:35:34.848 Asymmetric Namespace Access Log Page: Not Supported 00:35:34.848 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:34.848 Command Effects Log Page: Not Supported 00:35:34.848 Get Log Page Extended Data: Supported 00:35:34.848 Telemetry Log Pages: Not Supported 00:35:34.848 Persistent Event Log Pages: Not Supported 00:35:34.848 Supported Log Pages Log Page: May Support 00:35:34.848 Commands Supported & Effects Log Page: Not Supported 00:35:34.848 Feature Identifiers & Effects Log Page:May Support 00:35:34.848 NVMe-MI Commands & Effects Log Page: May Support 00:35:34.848 Data Area 4 for Telemetry Log: Not Supported 00:35:34.848 Error Log Page Entries Supported: 1 00:35:34.848 Keep Alive: Not Supported 00:35:34.848 00:35:34.848 NVM Command Set Attributes 00:35:34.848 ========================== 00:35:34.848 Submission Queue Entry Size 00:35:34.848 Max: 1 00:35:34.848 Min: 1 00:35:34.848 Completion Queue Entry Size 00:35:34.848 Max: 1 00:35:34.848 Min: 1 00:35:34.848 Number of Namespaces: 0 00:35:34.848 Compare Command: Not Supported 00:35:34.848 Write Uncorrectable Command: Not Supported 00:35:34.848 Dataset Management Command: Not Supported 00:35:34.848 Write Zeroes Command: Not Supported 00:35:34.848 Set Features Save Field: Not Supported 00:35:34.848 Reservations: Not Supported 00:35:34.848 Timestamp: Not Supported 00:35:34.848 Copy: Not Supported 00:35:34.848 Volatile Write Cache: Not Present 00:35:34.848 Atomic Write Unit (Normal): 1 00:35:34.848 Atomic Write Unit (PFail): 1 00:35:34.848 Atomic Compare & Write Unit: 1 00:35:34.848 Fused Compare & Write: Not Supported 00:35:34.848 Scatter-Gather List 00:35:34.848 SGL Command Set: Supported 00:35:34.848 SGL Keyed: Not Supported 00:35:34.848 SGL Bit Bucket Descriptor: Not Supported 00:35:34.848 SGL Metadata Pointer: Not Supported 00:35:34.848 Oversized SGL: Not Supported 00:35:34.848 SGL Metadata Address: Not Supported 00:35:34.848 SGL Offset: Supported 00:35:34.848 Transport SGL Data Block: Not Supported 00:35:34.848 Replay Protected Memory Block: Not Supported 00:35:34.848 00:35:34.848 Firmware Slot Information 00:35:34.848 ========================= 00:35:34.848 Active slot: 0 00:35:34.848 00:35:34.848 00:35:34.848 Error Log 00:35:34.848 
========= 00:35:34.848 00:35:34.848 Active Namespaces 00:35:34.848 ================= 00:35:34.848 Discovery Log Page 00:35:34.848 ================== 00:35:34.848 Generation Counter: 2 00:35:34.848 Number of Records: 2 00:35:34.848 Record Format: 0 00:35:34.848 00:35:34.848 Discovery Log Entry 0 00:35:34.848 ---------------------- 00:35:34.848 Transport Type: 3 (TCP) 00:35:34.848 Address Family: 1 (IPv4) 00:35:34.848 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:34.848 Entry Flags: 00:35:34.848 Duplicate Returned Information: 0 00:35:34.848 Explicit Persistent Connection Support for Discovery: 0 00:35:34.848 Transport Requirements: 00:35:34.848 Secure Channel: Not Specified 00:35:34.848 Port ID: 1 (0x0001) 00:35:34.848 Controller ID: 65535 (0xffff) 00:35:34.848 Admin Max SQ Size: 32 00:35:34.848 Transport Service Identifier: 4420 00:35:34.848 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:34.848 Transport Address: 10.0.0.1 00:35:34.848 Discovery Log Entry 1 00:35:34.848 ---------------------- 00:35:34.848 Transport Type: 3 (TCP) 00:35:34.848 Address Family: 1 (IPv4) 00:35:34.848 Subsystem Type: 2 (NVM Subsystem) 00:35:34.848 Entry Flags: 00:35:34.848 Duplicate Returned Information: 0 00:35:34.848 Explicit Persistent Connection Support for Discovery: 0 00:35:34.848 Transport Requirements: 00:35:34.848 Secure Channel: Not Specified 00:35:34.848 Port ID: 1 (0x0001) 00:35:34.848 Controller ID: 65535 (0xffff) 00:35:34.848 Admin Max SQ Size: 32 00:35:34.848 Transport Service Identifier: 4420 00:35:34.848 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:34.848 Transport Address: 10.0.0.1 00:35:34.848 03:46:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:34.848 get_feature(0x01) failed 00:35:34.848 get_feature(0x02) failed 00:35:34.848 get_feature(0x04) failed 00:35:34.848 ===================================================== 00:35:34.848 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:34.848 ===================================================== 00:35:34.848 Controller Capabilities/Features 00:35:34.848 ================================ 00:35:34.848 Vendor ID: 0000 00:35:34.848 Subsystem Vendor ID: 0000 00:35:34.848 Serial Number: 8728d240551836399c3d 00:35:34.848 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:34.848 Firmware Version: 6.8.9-20 00:35:34.848 Recommended Arb Burst: 6 00:35:34.848 IEEE OUI Identifier: 00 00 00 00:35:34.848 Multi-path I/O 00:35:34.848 May have multiple subsystem ports: Yes 00:35:34.848 May have multiple controllers: Yes 00:35:34.848 Associated with SR-IOV VF: No 00:35:34.848 Max Data Transfer Size: Unlimited 00:35:34.848 Max Number of Namespaces: 1024 00:35:34.848 Max Number of I/O Queues: 128 00:35:34.848 NVMe Specification Version (VS): 1.3 00:35:34.848 NVMe Specification Version (Identify): 1.3 00:35:34.848 Maximum Queue Entries: 1024 00:35:34.848 Contiguous Queues Required: No 00:35:34.848 Arbitration Mechanisms Supported 00:35:34.848 Weighted Round Robin: Not Supported 00:35:34.848 Vendor Specific: Not Supported 00:35:34.848 Reset Timeout: 7500 ms 00:35:34.848 Doorbell Stride: 4 bytes 00:35:34.848 NVM Subsystem Reset: Not Supported 00:35:34.848 Command Sets Supported 00:35:34.848 NVM Command Set: Supported 00:35:34.848 Boot Partition: Not Supported 00:35:34.848 
Memory Page Size Minimum: 4096 bytes 00:35:34.848 Memory Page Size Maximum: 4096 bytes 00:35:34.848 Persistent Memory Region: Not Supported 00:35:34.848 Optional Asynchronous Events Supported 00:35:34.848 Namespace Attribute Notices: Supported 00:35:34.848 Firmware Activation Notices: Not Supported 00:35:34.848 ANA Change Notices: Supported 00:35:34.848 PLE Aggregate Log Change Notices: Not Supported 00:35:34.848 LBA Status Info Alert Notices: Not Supported 00:35:34.848 EGE Aggregate Log Change Notices: Not Supported 00:35:34.848 Normal NVM Subsystem Shutdown event: Not Supported 00:35:34.848 Zone Descriptor Change Notices: Not Supported 00:35:34.848 Discovery Log Change Notices: Not Supported 00:35:34.848 Controller Attributes 00:35:34.848 128-bit Host Identifier: Supported 00:35:34.848 Non-Operational Permissive Mode: Not Supported 00:35:34.848 NVM Sets: Not Supported 00:35:34.848 Read Recovery Levels: Not Supported 00:35:34.848 Endurance Groups: Not Supported 00:35:34.848 Predictable Latency Mode: Not Supported 00:35:34.848 Traffic Based Keep ALive: Supported 00:35:34.848 Namespace Granularity: Not Supported 00:35:34.848 SQ Associations: Not Supported 00:35:34.848 UUID List: Not Supported 00:35:34.848 Multi-Domain Subsystem: Not Supported 00:35:34.848 Fixed Capacity Management: Not Supported 00:35:34.849 Variable Capacity Management: Not Supported 00:35:34.849 Delete Endurance Group: Not Supported 00:35:34.849 Delete NVM Set: Not Supported 00:35:34.849 Extended LBA Formats Supported: Not Supported 00:35:34.849 Flexible Data Placement Supported: Not Supported 00:35:34.849 00:35:34.849 Controller Memory Buffer Support 00:35:34.849 ================================ 00:35:34.849 Supported: No 00:35:34.849 00:35:34.849 Persistent Memory Region Support 00:35:34.849 ================================ 00:35:34.849 Supported: No 00:35:34.849 00:35:34.849 Admin Command Set Attributes 00:35:34.849 ============================ 00:35:34.849 Security Send/Receive: Not Supported 00:35:34.849 Format NVM: Not Supported 00:35:34.849 Firmware Activate/Download: Not Supported 00:35:34.849 Namespace Management: Not Supported 00:35:34.849 Device Self-Test: Not Supported 00:35:34.849 Directives: Not Supported 00:35:34.849 NVMe-MI: Not Supported 00:35:34.849 Virtualization Management: Not Supported 00:35:34.849 Doorbell Buffer Config: Not Supported 00:35:34.849 Get LBA Status Capability: Not Supported 00:35:34.849 Command & Feature Lockdown Capability: Not Supported 00:35:34.849 Abort Command Limit: 4 00:35:34.849 Async Event Request Limit: 4 00:35:34.849 Number of Firmware Slots: N/A 00:35:34.849 Firmware Slot 1 Read-Only: N/A 00:35:34.849 Firmware Activation Without Reset: N/A 00:35:34.849 Multiple Update Detection Support: N/A 00:35:34.849 Firmware Update Granularity: No Information Provided 00:35:34.849 Per-Namespace SMART Log: Yes 00:35:34.849 Asymmetric Namespace Access Log Page: Supported 00:35:34.849 ANA Transition Time : 10 sec 00:35:34.849 00:35:34.849 Asymmetric Namespace Access Capabilities 00:35:34.849 ANA Optimized State : Supported 00:35:34.849 ANA Non-Optimized State : Supported 00:35:34.849 ANA Inaccessible State : Supported 00:35:34.849 ANA Persistent Loss State : Supported 00:35:34.849 ANA Change State : Supported 00:35:34.849 ANAGRPID is not changed : No 00:35:34.849 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:34.849 00:35:34.849 ANA Group Identifier Maximum : 128 00:35:34.849 Number of ANA Group Identifiers : 128 00:35:34.849 Max Number of Allowed Namespaces : 1024 00:35:34.849 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:34.849 Command Effects Log Page: Supported 00:35:34.849 Get Log Page Extended Data: Supported 00:35:34.849 Telemetry Log Pages: Not Supported 00:35:34.849 Persistent Event Log Pages: Not Supported 00:35:34.849 Supported Log Pages Log Page: May Support 00:35:34.849 Commands Supported & Effects Log Page: Not Supported 00:35:34.849 Feature Identifiers & Effects Log Page:May Support 00:35:34.849 NVMe-MI Commands & Effects Log Page: May Support 00:35:34.849 Data Area 4 for Telemetry Log: Not Supported 00:35:34.849 Error Log Page Entries Supported: 128 00:35:34.849 Keep Alive: Supported 00:35:34.849 Keep Alive Granularity: 1000 ms 00:35:34.849 00:35:34.849 NVM Command Set Attributes 00:35:34.849 ========================== 00:35:34.849 Submission Queue Entry Size 00:35:34.849 Max: 64 00:35:34.849 Min: 64 00:35:34.849 Completion Queue Entry Size 00:35:34.849 Max: 16 00:35:34.849 Min: 16 00:35:34.849 Number of Namespaces: 1024 00:35:34.849 Compare Command: Not Supported 00:35:34.849 Write Uncorrectable Command: Not Supported 00:35:34.849 Dataset Management Command: Supported 00:35:34.849 Write Zeroes Command: Supported 00:35:34.849 Set Features Save Field: Not Supported 00:35:34.849 Reservations: Not Supported 00:35:34.849 Timestamp: Not Supported 00:35:34.849 Copy: Not Supported 00:35:34.849 Volatile Write Cache: Present 00:35:34.849 Atomic Write Unit (Normal): 1 00:35:34.849 Atomic Write Unit (PFail): 1 00:35:34.849 Atomic Compare & Write Unit: 1 00:35:34.849 Fused Compare & Write: Not Supported 00:35:34.849 Scatter-Gather List 00:35:34.849 SGL Command Set: Supported 00:35:34.849 SGL Keyed: Not Supported 00:35:34.849 SGL Bit Bucket Descriptor: Not Supported 00:35:34.849 SGL Metadata Pointer: Not Supported 00:35:34.849 Oversized SGL: Not Supported 00:35:34.849 SGL Metadata Address: Not Supported 00:35:34.849 SGL Offset: Supported 00:35:34.849 Transport SGL Data Block: Not Supported 00:35:34.849 Replay Protected Memory Block: Not Supported 00:35:34.849 00:35:34.849 Firmware Slot Information 00:35:34.849 ========================= 00:35:34.849 Active slot: 0 00:35:34.849 00:35:34.849 Asymmetric Namespace Access 00:35:34.849 =========================== 00:35:34.849 Change Count : 0 00:35:34.849 Number of ANA Group Descriptors : 1 00:35:34.849 ANA Group Descriptor : 0 00:35:34.849 ANA Group ID : 1 00:35:34.849 Number of NSID Values : 1 00:35:34.849 Change Count : 0 00:35:34.849 ANA State : 1 00:35:34.849 Namespace Identifier : 1 00:35:34.849 00:35:34.849 Commands Supported and Effects 00:35:34.849 ============================== 00:35:34.849 Admin Commands 00:35:34.849 -------------- 00:35:34.849 Get Log Page (02h): Supported 00:35:34.849 Identify (06h): Supported 00:35:34.849 Abort (08h): Supported 00:35:34.849 Set Features (09h): Supported 00:35:34.849 Get Features (0Ah): Supported 00:35:34.849 Asynchronous Event Request (0Ch): Supported 00:35:34.849 Keep Alive (18h): Supported 00:35:34.849 I/O Commands 00:35:34.849 ------------ 00:35:34.849 Flush (00h): Supported 00:35:34.849 Write (01h): Supported LBA-Change 00:35:34.849 Read (02h): Supported 00:35:34.849 Write Zeroes (08h): Supported LBA-Change 00:35:34.849 Dataset Management (09h): Supported 00:35:34.849 00:35:34.849 Error Log 00:35:34.849 ========= 00:35:34.849 Entry: 0 00:35:34.849 Error Count: 0x3 00:35:34.849 Submission Queue Id: 0x0 00:35:34.849 Command Id: 0x5 00:35:34.849 Phase Bit: 0 00:35:34.849 Status Code: 0x2 00:35:34.849 Status Code Type: 0x0 00:35:34.849 Do Not Retry: 1 00:35:34.849 
Error Location: 0x28 00:35:34.849 LBA: 0x0 00:35:34.849 Namespace: 0x0 00:35:34.849 Vendor Log Page: 0x0 00:35:34.849 ----------- 00:35:34.849 Entry: 1 00:35:34.849 Error Count: 0x2 00:35:34.849 Submission Queue Id: 0x0 00:35:34.849 Command Id: 0x5 00:35:34.849 Phase Bit: 0 00:35:34.849 Status Code: 0x2 00:35:34.849 Status Code Type: 0x0 00:35:34.849 Do Not Retry: 1 00:35:34.849 Error Location: 0x28 00:35:34.849 LBA: 0x0 00:35:34.849 Namespace: 0x0 00:35:34.849 Vendor Log Page: 0x0 00:35:34.849 ----------- 00:35:34.849 Entry: 2 00:35:34.849 Error Count: 0x1 00:35:34.849 Submission Queue Id: 0x0 00:35:34.849 Command Id: 0x4 00:35:34.849 Phase Bit: 0 00:35:34.849 Status Code: 0x2 00:35:34.849 Status Code Type: 0x0 00:35:34.849 Do Not Retry: 1 00:35:34.849 Error Location: 0x28 00:35:34.849 LBA: 0x0 00:35:34.849 Namespace: 0x0 00:35:34.849 Vendor Log Page: 0x0 00:35:34.849 00:35:34.849 Number of Queues 00:35:34.849 ================ 00:35:34.850 Number of I/O Submission Queues: 128 00:35:34.850 Number of I/O Completion Queues: 128 00:35:34.850 00:35:34.850 ZNS Specific Controller Data 00:35:34.850 ============================ 00:35:34.850 Zone Append Size Limit: 0 00:35:34.850 00:35:34.850 00:35:34.850 Active Namespaces 00:35:34.850 ================= 00:35:34.850 get_feature(0x05) failed 00:35:34.850 Namespace ID:1 00:35:34.850 Command Set Identifier: NVM (00h) 00:35:34.850 Deallocate: Supported 00:35:34.850 Deallocated/Unwritten Error: Not Supported 00:35:34.850 Deallocated Read Value: Unknown 00:35:34.850 Deallocate in Write Zeroes: Not Supported 00:35:34.850 Deallocated Guard Field: 0xFFFF 00:35:34.850 Flush: Supported 00:35:34.850 Reservation: Not Supported 00:35:34.850 Namespace Sharing Capabilities: Multiple Controllers 00:35:34.850 Size (in LBAs): 1953525168 (931GiB) 00:35:34.850 Capacity (in LBAs): 1953525168 (931GiB) 00:35:34.850 Utilization (in LBAs): 1953525168 (931GiB) 00:35:34.850 UUID: dc45da93-ccfa-4484-86ec-c69db714a9ed 00:35:34.850 Thin Provisioning: Not Supported 00:35:34.850 Per-NS Atomic Units: Yes 00:35:34.850 Atomic Boundary Size (Normal): 0 00:35:34.850 Atomic Boundary Size (PFail): 0 00:35:34.850 Atomic Boundary Offset: 0 00:35:34.850 NGUID/EUI64 Never Reused: No 00:35:34.850 ANA group ID: 1 00:35:34.850 Namespace Write Protected: No 00:35:34.850 Number of LBA Formats: 1 00:35:34.850 Current LBA Format: LBA Format #00 00:35:34.850 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:34.850 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.850 rmmod nvme_tcp 00:35:34.850 rmmod nvme_fabrics 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:34.850 03:46:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:34.850 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.108 03:46:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.012 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.012 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:37.012 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:37.012 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:37.012 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:37.012 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:37.013 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:37.013 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:37.013 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:37.013 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:37.013 03:46:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:39.544 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:39.544 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:40.480 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:40.480 00:35:40.480 real 0m15.186s 00:35:40.480 user 0m3.836s 00:35:40.480 sys 0m7.697s 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:40.480 ************************************ 00:35:40.480 END TEST nvmf_identify_kernel_target 00:35:40.480 ************************************ 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.480 ************************************ 00:35:40.480 START TEST nvmf_auth_host 00:35:40.480 ************************************ 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:40.480 * Looking for test storage... 
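The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above come from scripts/setup.sh rebinding the I/OAT engines and the NVMe drive to vfio-pci so SPDK's userspace drivers can claim them. As a rough, hedged sketch only (not the script's exact logic, which also handles hugepages, IOMMU checks and permissions), the rebind of one device from the list above amounts to the standard sysfs sequence:

    # minimal sketch; 0000:00:04.0 is one of the BDFs listed above
    modprobe vfio-pci
    dev=0000:00:04.0
    echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind       # detach ioatdma/nvme
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override   # pin the new driver
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind            # attach vfio-pci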
00:35:40.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:35:40.480 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.740 --rc genhtml_branch_coverage=1 00:35:40.740 --rc genhtml_function_coverage=1 00:35:40.740 --rc genhtml_legend=1 00:35:40.740 --rc geninfo_all_blocks=1 00:35:40.740 --rc geninfo_unexecuted_blocks=1 00:35:40.740 00:35:40.740 ' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.740 --rc genhtml_branch_coverage=1 00:35:40.740 --rc genhtml_function_coverage=1 00:35:40.740 --rc genhtml_legend=1 00:35:40.740 --rc geninfo_all_blocks=1 00:35:40.740 --rc geninfo_unexecuted_blocks=1 00:35:40.740 00:35:40.740 ' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.740 --rc genhtml_branch_coverage=1 00:35:40.740 --rc genhtml_function_coverage=1 00:35:40.740 --rc genhtml_legend=1 00:35:40.740 --rc geninfo_all_blocks=1 00:35:40.740 --rc geninfo_unexecuted_blocks=1 00:35:40.740 00:35:40.740 ' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:40.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:40.740 --rc genhtml_branch_coverage=1 00:35:40.740 --rc genhtml_function_coverage=1 00:35:40.740 --rc genhtml_legend=1 00:35:40.740 --rc geninfo_all_blocks=1 00:35:40.740 --rc geninfo_unexecuted_blocks=1 00:35:40.740 00:35:40.740 ' 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.740 03:46:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.740 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:40.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:40.741 03:46:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.012 03:46:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.012 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:46.013 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:46.013 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.013 
03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:46.013 Found net devices under 0000:af:00.0: cvl_0_0 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:46.013 Found net devices under 0000:af:00.1: cvl_0_1 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.013 03:46:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.013 03:46:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:35:46.013 00:35:46.013 --- 10.0.0.2 ping statistics --- 00:35:46.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.013 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:35:46.013 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:46.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:35:46.272 00:35:46.272 --- 10.0.0.1 ping statistics --- 00:35:46.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.272 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.272 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2886539 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2886539 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2886539 ']' 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
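Condensed from the trace above, the TCP test topology that nvmftestinit builds puts the first E810 port (cvl_0_0) into a private network namespace as the target side, while the second port (cvl_0_1) stays in the root namespace as the initiator side:

    # recap of the commands traced above (nvmf/common.sh, nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk                          # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the SPDK target is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth

The two single pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify both directions of the link before nvmfappstart launches the target.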
00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.273 03:46:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d34232369a83986bdd85688940780044 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.I6b 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d34232369a83986bdd85688940780044 0 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d34232369a83986bdd85688940780044 0 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d34232369a83986bdd85688940780044 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.I6b 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.I6b 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.I6b 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.210 03:46:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46f77ee60a1e80d712251fb97eebb3cb2bee149d84677bb75f61fc555a39db9f 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xZF 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46f77ee60a1e80d712251fb97eebb3cb2bee149d84677bb75f61fc555a39db9f 3 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46f77ee60a1e80d712251fb97eebb3cb2bee149d84677bb75f61fc555a39db9f 3 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46f77ee60a1e80d712251fb97eebb3cb2bee149d84677bb75f61fc555a39db9f 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xZF 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xZF 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xZF 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1daa1688e8496a2fe688c79b4e9299511773a2b8a19d3199 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oYs 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1daa1688e8496a2fe688c79b4e9299511773a2b8a19d3199 0 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1daa1688e8496a2fe688c79b4e9299511773a2b8a19d3199 0 
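The gen_dhchap_key calls above draw a random secret of the requested length from /dev/urandom and hand it to format_dhchap_key, which wraps it in the DHHC-1 secret representation used for NVMe in-band authentication. A minimal sketch of that flow, assuming the standard DHHC-1 encoding (secret concatenated with its little-endian CRC-32, then base64) and using a hypothetical helper name rather than SPDK's exact implementation:

    # hedged sketch, not SPDK's helper; digest id 0=null 1=sha256 2=sha384 3=sha512
    gen_dhchap_sketch() {   # usage: gen_dhchap_sketch <digest-id> <hex-length>
        local digest=$1 hexlen=$2 key
        key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # random secret as hex
        python3 -c 'import sys,base64,zlib; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
    }

The test stores each generated secret in a mode-0600 temp file (mktemp -t spdk.key-<digest>.XXX, as in /tmp/spdk.key-null.I6b above), and the keys[] and ckeys[] arrays collect those file paths for the authentication cases that follow.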
00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1daa1688e8496a2fe688c79b4e9299511773a2b8a19d3199 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oYs 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oYs 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oYs 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bfccda2be795c73f8a400dce6ff41a0fe657188b26f348fe 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.32k 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bfccda2be795c73f8a400dce6ff41a0fe657188b26f348fe 2 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bfccda2be795c73f8a400dce6ff41a0fe657188b26f348fe 2 00:35:47.210 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.211 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.211 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bfccda2be795c73f8a400dce6ff41a0fe657188b26f348fe 00:35:47.211 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:47.211 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.211 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.32k 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.32k 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.32k 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.470 03:46:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b347b4957ff4906cd5f0cc5d0d6066af 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Ayk 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b347b4957ff4906cd5f0cc5d0d6066af 1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b347b4957ff4906cd5f0cc5d0d6066af 1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b347b4957ff4906cd5f0cc5d0d6066af 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Ayk 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Ayk 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Ayk 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eeebf984df9316f41d7a8eb48f07e99d 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Hiq 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eeebf984df9316f41d7a8eb48f07e99d 1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eeebf984df9316f41d7a8eb48f07e99d 1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=eeebf984df9316f41d7a8eb48f07e99d 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Hiq 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Hiq 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Hiq 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c4b58d3c573edbb82d3f756627596996252027ce30fba9e7 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.I15 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c4b58d3c573edbb82d3f756627596996252027ce30fba9e7 2 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c4b58d3c573edbb82d3f756627596996252027ce30fba9e7 2 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c4b58d3c573edbb82d3f756627596996252027ce30fba9e7 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.I15 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.I15 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.I15 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.470 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:47.471 03:46:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a3706366a267f801f66ae4395cd47f7 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.6Jm 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a3706366a267f801f66ae4395cd47f7 0 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a3706366a267f801f66ae4395cd47f7 0 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2a3706366a267f801f66ae4395cd47f7 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.6Jm 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.6Jm 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6Jm 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5a2e29900f45ff9e86ad3ddf8d57f75da37ea13b8f5c32dbb6a771ee10f44533 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MKA 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5a2e29900f45ff9e86ad3ddf8d57f75da37ea13b8f5c32dbb6a771ee10f44533 3 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5a2e29900f45ff9e86ad3ddf8d57f75da37ea13b8f5c32dbb6a771ee10f44533 3 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5a2e29900f45ff9e86ad3ddf8d57f75da37ea13b8f5c32dbb6a771ee10f44533 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:47.471 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MKA 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MKA 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.MKA 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2886539 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2886539 ']' 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.I6b 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xZF ]] 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xZF 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oYs 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.32k ]] 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.32k 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.730 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Ayk 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Hiq ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Hiq 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.I15 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6Jm ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6Jm 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.MKA 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.989 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.990 03:46:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:47.990 03:46:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:47.990 03:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:47.990 03:46:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:50.520 Waiting for block devices as requested 00:35:50.520 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:50.520 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:50.520 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:50.779 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:50.779 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:50.779 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:50.779 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:51.038 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:51.038 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:51.038 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:51.038 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:51.296 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:51.296 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:51.296 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:51.555 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:51.555 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:51.555 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:52.124 No valid GPT data, bailing 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:52.124 03:46:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:52.124 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:52.383 00:35:52.383 Discovery Log Number of Records 2, Generation counter 2 00:35:52.383 =====Discovery Log Entry 0====== 00:35:52.383 trtype: tcp 00:35:52.383 adrfam: ipv4 00:35:52.383 subtype: current discovery subsystem 00:35:52.383 treq: not specified, sq flow control disable supported 00:35:52.383 portid: 1 00:35:52.383 trsvcid: 4420 00:35:52.383 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:52.383 traddr: 10.0.0.1 00:35:52.383 eflags: none 00:35:52.383 sectype: none 00:35:52.383 =====Discovery Log Entry 1====== 00:35:52.383 trtype: tcp 00:35:52.383 adrfam: ipv4 00:35:52.383 subtype: nvme subsystem 00:35:52.383 treq: not specified, sq flow control disable supported 00:35:52.383 portid: 1 00:35:52.383 trsvcid: 4420 00:35:52.383 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:52.383 traddr: 10.0.0.1 00:35:52.383 eflags: none 00:35:52.383 sectype: none 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.383 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.384 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.643 nvme0n1 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
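For readers following the trace, the host-side DH-HMAC-CHAP flow above reduces to a few steps: draw random bytes, wrap them into a "DHHC-1:<digest-id>:<base64>:" secret, register the secret files with the SPDK keyring over the RPC socket, and pass the key names to bdev_nvme_attach_controller. The sketch below is a condensed, hedged recap of those steps, not part of the test script: it calls scripts/rpc.py directly where the test uses its rpc_cmd wrapper, the key material and file names are placeholders, and format_dhchap_key is the nvmf/common.sh helper whose Python body is not reproduced in this trace.

  # Illustrative recap of the host-side steps traced above (placeholder key material).
  key_hex=$(xxd -p -c0 -l 24 /dev/urandom)          # 48 hex chars, sized for a sha384 secret
  keyfile=$(mktemp -t spdk.key-sha384.XXX)
  # format_dhchap_key (nvmf/common.sh) wraps the hex into "DHHC-1:<digest-id>:<base64>:"
  # and the result is written to $keyfile with mode 0600; its Python body is not shown here.
  chmod 0600 "$keyfile"

  # Register host and controller secrets with the SPDK keyring, then connect with auth.
  ./scripts/rpc.py keyring_file_add_key key1 "$keyfile"
  ./scripts/rpc.py keyring_file_add_key ckey1 "$ckeyfile"   # $ckeyfile generated the same way
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

On success the controller shows up as nvme0 in bdev_nvme_get_controllers and exposes the namespace as nvme0n1, which is exactly what the [[ nvme0 == nvme0 ]] checks and the bare "nvme0n1" lines in the trace above are asserting before each detach and the next digest/dhgroup combination.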
00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.643 nvme0n1 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.643 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.902 03:46:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:52.902 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.903 03:46:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.903 nvme0n1 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:52.903 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.163 nvme0n1 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.163 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.164 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.164 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.164 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:53.164 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.164 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.423 nvme0n1 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.423 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.682 nvme0n1 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.682 03:46:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.682 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.683 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.683 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.683 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.683 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:53.683 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.683 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.941 nvme0n1 00:35:53.941 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.941 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.941 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.941 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.941 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.941 03:46:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:35:53.941 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.942 
03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.942 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.200 nvme0n1 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.200 03:46:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.200 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.458 nvme0n1 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.458 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.459 03:46:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.459 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.718 nvme0n1 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.718 03:46:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.718 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.977 nvme0n1 00:35:54.977 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.977 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.977 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.977 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.977 03:46:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.977 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 nvme0n1 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:55.236 03:46:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.236 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.237 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.237 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.237 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.237 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.237 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.237 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.495 nvme0n1 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.495 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.753 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.012 nvme0n1 00:35:56.012 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.012 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.012 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.012 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.012 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.012 03:46:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.012 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.271 nvme0n1 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.271 03:46:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.271 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.530 nvme0n1 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.530 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.531 03:46:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.101 nvme0n1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 
00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.101 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.453 nvme0n1 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.453 03:46:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.453 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.454 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.084 nvme0n1 00:35:58.084 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.084 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.084 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.084 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.084 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.084 03:46:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.084 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 nvme0n1 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.343 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.344 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.911 nvme0n1 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.911 03:46:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:59.479 nvme0n1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.479 03:47:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.046 nvme0n1 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:00.046 
03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.046 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.304 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.875 nvme0n1 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.875 
03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.875 03:47:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.442 nvme0n1 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:01.442 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.443 03:47:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.010 nvme0n1 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:02.010 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.269 nvme0n1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.269 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.528 nvme0n1 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:02.528 03:47:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.528 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.787 nvme0n1 00:36:02.787 03:47:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.787 03:47:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.046 nvme0n1 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.046 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.305 nvme0n1 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.305 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.306 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.564 nvme0n1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.564 
03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.564 03:47:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.564 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.823 nvme0n1 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.823 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:03.824 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.824 03:47:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.081 nvme0n1 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:04.081 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.082 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.340 nvme0n1 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:04.340 
03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.340 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.598 nvme0n1 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.598 
03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:04.598 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.599 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.857 nvme0n1 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.857 03:47:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:04.857 03:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.857 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.116 nvme0n1 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.116 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.374 nvme0n1 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.374 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:05.632 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.633 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.892 nvme0n1 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:05.892 03:47:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.892 03:47:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.151 nvme0n1 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.151 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.717 nvme0n1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.717 03:47:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.975 nvme0n1 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.975 03:47:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.975 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.234 03:47:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.234 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.235 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.235 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.235 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.235 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.235 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.235 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.493 nvme0n1 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.493 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.494 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.494 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:07.494 03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.494 
03:47:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.060 nvme0n1 00:36:08.060 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.060 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.060 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.061 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.319 nvme0n1 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.319 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.578 03:47:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.578 03:47:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.146 nvme0n1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.146 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.713 nvme0n1 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.713 
03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.713 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.714 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.714 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:09.714 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.714 03:47:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.280 nvme0n1 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:10.280 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.539 03:47:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.106 nvme0n1 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.106 03:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:11.106 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.107 03:47:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.107 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.674 nvme0n1 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.674 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.675 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:11.934 nvme0n1 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.934 03:47:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.934 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.935 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.935 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.935 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.194 nvme0n1 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:12.194 
03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.194 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.454 nvme0n1 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.454 
03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.454 nvme0n1 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.454 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.714 nvme0n1 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.714 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.973 03:47:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.973 nvme0n1 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.973 
03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.973 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:12.974 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:12.974 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.232 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.233 03:47:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.233 nvme0n1 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:13.233 03:47:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.233 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.492 nvme0n1 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:13.492 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.492 03:47:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.751 nvme0n1 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:13.751 
03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.751 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.010 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.010 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.010 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:14.010 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.010 03:47:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
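The loop traced above repeats the same per-key sequence for every digest/DH-group combination: host/auth.sh installs the expected DHHC-1 secret on the kernel nvmet target side (nvmet_auth_set_key), restricts the SPDK initiator to the digest and DH group under test (bdev_nvme_set_options), connects with DH-HMAC-CHAP enabled (bdev_nvme_attach_controller with --dhchap-key, plus --dhchap-ctrlr-key when bidirectional authentication is exercised), confirms the controller shows up in bdev_nvme_get_controllers, and detaches it again. A minimal sketch of one such iteration, assuming scripts/rpc.py exposes the same parameters that rpc_cmd passes in this trace and that key0/ckey0 were registered as key names earlier in the test (not shown in this excerpt):

  # One iteration of the auth loop, e.g. the sha512 / ffdhe3072 / key0 case traced earlier in this block.
  # key0/ckey0 refer to keys set up beforehand by the harness, not raw DHHC-1 secrets.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Attach over TCP with DH-HMAC-CHAP; --dhchap-ctrlr-key requests bidirectional authentication.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the authenticated controller exists, then detach before the next combination.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0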
00:36:14.010 nvme0n1 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:14.010 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:14.011 03:47:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.011 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.268 nvme0n1 00:36:14.268 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.268 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.268 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.268 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.268 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.526 03:47:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.526 03:47:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.526 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.527 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.785 nvme0n1 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.785 03:47:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 nvme0n1 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.044 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.302 nvme0n1 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.302 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.303 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.561 nvme0n1 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.561 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.819 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.819 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.819 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.819 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:15.819 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.820 03:47:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.820 03:47:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.079 nvme0n1 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.079 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.080 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.080 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.080 03:47:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.080 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.647 nvme0n1 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.647 03:47:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.906 nvme0n1 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.906 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.166 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.425 nvme0n1 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.425 03:47:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.425 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.426 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.993 nvme0n1 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.993 03:47:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDM0MjMyMzY5YTgzOTg2YmRkODU2ODg5NDA3ODAwNDQ8LBGi: 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: ]] 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDZmNzdlZTYwYTFlODBkNzEyMjUxZmI5N2VlYmIzY2IyYmVlMTQ5ZDg0Njc3YmI3NWY2MWZjNTU1YTM5ZGI5ZoHWspY=: 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.993 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.561 nvme0n1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.561 03:47:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.129 nvme0n1 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.129 03:47:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.129 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.387 03:47:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.387 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.956 nvme0n1 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzRiNThkM2M1NzNlZGJiODJkM2Y3NTY2Mjc1OTY5OTYyNTIwMjdjZTMwZmJhOWU33khPZQ==: 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmEzNzA2MzY2YTI2N2Y4MDFmNjZhZTQzOTVjZDQ3ZjcXCpcQ: 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:19.956 03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.956 
03:47:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.588 nvme0n1 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWEyZTI5OTAwZjQ1ZmY5ZTg2YWQzZGRmOGQ1N2Y3NWRhMzdlYTEzYjhmNWMzMmRiYjZhNzcxZWUxMGY0NDUzMzF2Z9c=: 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:20.588 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.589 03:47:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.183 nvme0n1 00:36:21.183 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.183 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.183 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.183 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.183 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.183 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.184 request: 00:36:21.184 { 00:36:21.184 "name": "nvme0", 00:36:21.184 "trtype": "tcp", 00:36:21.184 "traddr": "10.0.0.1", 00:36:21.184 "adrfam": "ipv4", 00:36:21.184 "trsvcid": "4420", 00:36:21.184 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:21.184 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:21.184 "prchk_reftag": false, 00:36:21.184 "prchk_guard": false, 00:36:21.184 "hdgst": false, 00:36:21.184 "ddgst": false, 00:36:21.184 "allow_unrecognized_csi": false, 00:36:21.184 "method": "bdev_nvme_attach_controller", 00:36:21.184 "req_id": 1 00:36:21.184 } 00:36:21.184 Got JSON-RPC error response 00:36:21.184 response: 00:36:21.184 { 00:36:21.184 "code": -5, 00:36:21.184 "message": "Input/output error" 00:36:21.184 } 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.184 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.443 request: 00:36:21.443 { 00:36:21.443 "name": "nvme0", 00:36:21.443 "trtype": "tcp", 00:36:21.443 "traddr": "10.0.0.1", 00:36:21.443 "adrfam": "ipv4", 00:36:21.443 "trsvcid": "4420", 00:36:21.443 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:21.443 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:21.443 "prchk_reftag": false, 00:36:21.443 "prchk_guard": false, 00:36:21.443 "hdgst": false, 00:36:21.443 "ddgst": false, 00:36:21.443 "dhchap_key": "key2", 00:36:21.443 "allow_unrecognized_csi": false, 00:36:21.443 "method": "bdev_nvme_attach_controller", 00:36:21.443 "req_id": 1 00:36:21.443 } 00:36:21.443 Got JSON-RPC error response 00:36:21.443 response: 00:36:21.443 { 00:36:21.443 "code": -5, 00:36:21.443 "message": "Input/output error" 00:36:21.443 } 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.443 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.444 request: 00:36:21.444 { 00:36:21.444 "name": "nvme0", 00:36:21.444 "trtype": "tcp", 00:36:21.444 "traddr": "10.0.0.1", 00:36:21.444 "adrfam": "ipv4", 00:36:21.444 "trsvcid": "4420", 00:36:21.444 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:21.444 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:21.444 "prchk_reftag": false, 00:36:21.444 "prchk_guard": false, 00:36:21.444 "hdgst": false, 00:36:21.444 "ddgst": false, 00:36:21.444 "dhchap_key": "key1", 00:36:21.444 "dhchap_ctrlr_key": "ckey2", 00:36:21.444 "allow_unrecognized_csi": false, 00:36:21.444 "method": "bdev_nvme_attach_controller", 00:36:21.444 "req_id": 1 00:36:21.444 } 00:36:21.444 Got JSON-RPC error response 00:36:21.444 response: 00:36:21.444 { 00:36:21.444 "code": -5, 00:36:21.444 "message": "Input/output 
error" 00:36:21.444 } 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.444 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.703 nvme0n1 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.703 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.962 request: 00:36:21.962 { 00:36:21.962 "name": "nvme0", 00:36:21.962 "dhchap_key": "key1", 00:36:21.962 "dhchap_ctrlr_key": "ckey2", 00:36:21.962 "method": "bdev_nvme_set_keys", 00:36:21.962 "req_id": 1 00:36:21.962 } 00:36:21.962 Got JSON-RPC error response 00:36:21.962 response: 00:36:21.962 { 00:36:21.962 "code": -13, 00:36:21.962 "message": "Permission denied" 00:36:21.962 } 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:21.962 03:47:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:22.896 03:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.896 03:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:22.896 03:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.896 03:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.896 03:47:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.896 03:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:22.896 03:47:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:23.831 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.831 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:23.831 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.831 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.831 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRhYTE2ODhlODQ5NmEyZmU2ODhjNzliNGU5Mjk5NTExNzczYTJiOGExOWQzMTk5cXstAw==: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmZjY2RhMmJlNzk1YzczZjhhNDAwZGNlNmZmNDFhMGZlNjU3MTg4YjI2ZjM0OGZlARJdoA==: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.090 nvme0n1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjM0N2I0OTU3ZmY0OTA2Y2Q1ZjBjYzVkMGQ2MDY2YWZAfN2+: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWVlYmY5ODRkZjkzMTZmNDFkN2E4ZWI0OGYwN2U5OWTiJNv4: 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.090 request: 00:36:24.090 { 00:36:24.090 "name": "nvme0", 00:36:24.090 "dhchap_key": "key2", 00:36:24.090 "dhchap_ctrlr_key": "ckey1", 00:36:24.090 "method": "bdev_nvme_set_keys", 00:36:24.090 "req_id": 1 00:36:24.090 } 00:36:24.090 Got JSON-RPC error response 00:36:24.090 response: 00:36:24.090 { 00:36:24.090 "code": -13, 00:36:24.090 "message": "Permission denied" 00:36:24.090 } 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.090 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.348 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:24.348 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.348 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:24.348 03:47:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:25.293 03:47:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:25.293 rmmod nvme_tcp 00:36:25.293 rmmod nvme_fabrics 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2886539 ']' 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2886539 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2886539 ']' 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2886539 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2886539 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2886539' 00:36:25.293 killing process with pid 2886539 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2886539 00:36:25.293 03:47:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2886539 00:36:26.228 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:26.228 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:26.228 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:26.228 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:26.229 03:47:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:28.765 03:47:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:31.303 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:31.303 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:31.872 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:31.872 03:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.I6b /tmp/spdk.key-null.oYs /tmp/spdk.key-sha256.Ayk /tmp/spdk.key-sha384.I15 /tmp/spdk.key-sha512.MKA /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:31.872 03:47:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:34.409 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:34.409 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:36:34.409 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:34.409 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:34.409 00:36:34.409 real 0m53.959s 00:36:34.409 user 0m49.041s 00:36:34.409 sys 0m11.982s 00:36:34.409 03:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:34.409 03:47:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.409 ************************************ 00:36:34.409 END TEST nvmf_auth_host 00:36:34.409 ************************************ 00:36:34.409 03:47:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:34.410 03:47:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:34.410 03:47:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:34.410 03:47:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:34.410 03:47:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.410 ************************************ 00:36:34.410 START TEST nvmf_digest 00:36:34.410 ************************************ 00:36:34.410 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:34.670 * Looking for test storage... 
00:36:34.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.670 --rc genhtml_branch_coverage=1 00:36:34.670 --rc genhtml_function_coverage=1 00:36:34.670 --rc genhtml_legend=1 00:36:34.670 --rc geninfo_all_blocks=1 00:36:34.670 --rc geninfo_unexecuted_blocks=1 00:36:34.670 00:36:34.670 ' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.670 --rc genhtml_branch_coverage=1 00:36:34.670 --rc genhtml_function_coverage=1 00:36:34.670 --rc genhtml_legend=1 00:36:34.670 --rc geninfo_all_blocks=1 00:36:34.670 --rc geninfo_unexecuted_blocks=1 00:36:34.670 00:36:34.670 ' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.670 --rc genhtml_branch_coverage=1 00:36:34.670 --rc genhtml_function_coverage=1 00:36:34.670 --rc genhtml_legend=1 00:36:34.670 --rc geninfo_all_blocks=1 00:36:34.670 --rc geninfo_unexecuted_blocks=1 00:36:34.670 00:36:34.670 ' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:34.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.670 --rc genhtml_branch_coverage=1 00:36:34.670 --rc genhtml_function_coverage=1 00:36:34.670 --rc genhtml_legend=1 00:36:34.670 --rc geninfo_all_blocks=1 00:36:34.670 --rc geninfo_unexecuted_blocks=1 00:36:34.670 00:36:34.670 ' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.670 
03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.670 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:34.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:34.671 03:47:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:34.671 03:47:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.945 
03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:39.945 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:39.945 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:39.945 Found net devices under 0000:af:00.0: cvl_0_0 
00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:39.945 Found net devices under 0000:af:00.1: cvl_0_1 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.945 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:40.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:36:40.205 00:36:40.205 --- 10.0.0.2 ping statistics --- 00:36:40.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.205 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:40.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:36:40.205 00:36:40.205 --- 10.0.0.1 ping statistics --- 00:36:40.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.205 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:40.205 ************************************ 00:36:40.205 START TEST nvmf_digest_clean 00:36:40.205 ************************************ 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2900238 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2900238 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2900238 ']' 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.205 03:47:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:40.464 [2024-12-13 03:47:41.460225] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:36:40.464 [2024-12-13 03:47:41.460320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:40.464 [2024-12-13 03:47:41.579488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.722 [2024-12-13 03:47:41.689264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:40.722 [2024-12-13 03:47:41.689304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:40.722 [2024-12-13 03:47:41.689313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:40.722 [2024-12-13 03:47:41.689339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:40.722 [2024-12-13 03:47:41.689347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:40.722 [2024-12-13 03:47:41.690568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.290 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:41.549 null0 00:36:41.549 [2024-12-13 03:47:42.643666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.549 [2024-12-13 03:47:42.667898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2900479 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2900479 /var/tmp/bperf.sock 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2900479 ']' 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:41.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.549 03:47:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:41.549 [2024-12-13 03:47:42.747468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:36:41.549 [2024-12-13 03:47:42.747551] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900479 ] 00:36:41.808 [2024-12-13 03:47:42.857984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.808 [2024-12-13 03:47:42.967434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.375 03:47:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.375 03:47:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:42.375 03:47:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:42.375 03:47:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:42.375 03:47:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:42.943 03:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:42.943 03:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:43.201 nvme0n1 00:36:43.201 03:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:43.201 03:47:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:43.460 Running I/O for 2 seconds... 
00:36:45.327 22126.00 IOPS, 86.43 MiB/s [2024-12-13T02:47:46.536Z] 21878.00 IOPS, 85.46 MiB/s 00:36:45.328 Latency(us) 00:36:45.328 [2024-12-13T02:47:46.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.328 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:45.328 nvme0n1 : 2.01 21899.52 85.54 0.00 0.00 5836.93 3011.54 18599.74 00:36:45.328 [2024-12-13T02:47:46.537Z] =================================================================================================================== 00:36:45.328 [2024-12-13T02:47:46.537Z] Total : 21899.52 85.54 0.00 0.00 5836.93 3011.54 18599.74 00:36:45.328 { 00:36:45.328 "results": [ 00:36:45.328 { 00:36:45.328 "job": "nvme0n1", 00:36:45.328 "core_mask": "0x2", 00:36:45.328 "workload": "randread", 00:36:45.328 "status": "finished", 00:36:45.328 "queue_depth": 128, 00:36:45.328 "io_size": 4096, 00:36:45.328 "runtime": 2.007944, 00:36:45.328 "iops": 21899.515125919846, 00:36:45.328 "mibps": 85.5449809606244, 00:36:45.328 "io_failed": 0, 00:36:45.328 "io_timeout": 0, 00:36:45.328 "avg_latency_us": 5836.927457606562, 00:36:45.328 "min_latency_us": 3011.535238095238, 00:36:45.328 "max_latency_us": 18599.74095238095 00:36:45.328 } 00:36:45.328 ], 00:36:45.328 "core_count": 1 00:36:45.328 } 00:36:45.328 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:45.328 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:45.328 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:45.328 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:45.328 | select(.opcode=="crc32c") 00:36:45.328 | "\(.module_name) \(.executed)"' 00:36:45.328 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2900479 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2900479 ']' 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2900479 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900479 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900479' 00:36:45.587 killing process with pid 2900479 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2900479 00:36:45.587 Received shutdown signal, test time was about 2.000000 seconds 00:36:45.587 00:36:45.587 Latency(us) 00:36:45.587 [2024-12-13T02:47:46.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.587 [2024-12-13T02:47:46.796Z] =================================================================================================================== 00:36:45.587 [2024-12-13T02:47:46.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:45.587 03:47:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2900479 00:36:46.523 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:46.523 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2901175 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2901175 /var/tmp/bperf.sock 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2901175 ']' 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:46.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:46.524 03:47:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:46.524 [2024-12-13 03:47:47.669264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:36:46.524 [2024-12-13 03:47:47.669359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901175 ] 00:36:46.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:46.524 Zero copy mechanism will not be used. 00:36:46.782 [2024-12-13 03:47:47.782650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.782 [2024-12-13 03:47:47.892571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:47.349 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:47.349 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:47.349 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:47.349 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:47.349 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:47.916 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.916 03:47:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:48.175 nvme0n1 00:36:48.175 03:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:48.175 03:47:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:48.433 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:48.433 Zero copy mechanism will not be used. 00:36:48.433 Running I/O for 2 seconds... 
00:36:50.305 5396.00 IOPS, 674.50 MiB/s [2024-12-13T02:47:51.514Z] 5199.00 IOPS, 649.88 MiB/s 00:36:50.305 Latency(us) 00:36:50.305 [2024-12-13T02:47:51.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.305 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:50.305 nvme0n1 : 2.00 5197.88 649.74 0.00 0.00 3075.20 655.36 5773.41 00:36:50.305 [2024-12-13T02:47:51.514Z] =================================================================================================================== 00:36:50.305 [2024-12-13T02:47:51.514Z] Total : 5197.88 649.74 0.00 0.00 3075.20 655.36 5773.41 00:36:50.305 { 00:36:50.305 "results": [ 00:36:50.305 { 00:36:50.305 "job": "nvme0n1", 00:36:50.305 "core_mask": "0x2", 00:36:50.305 "workload": "randread", 00:36:50.305 "status": "finished", 00:36:50.305 "queue_depth": 16, 00:36:50.305 "io_size": 131072, 00:36:50.305 "runtime": 2.003509, 00:36:50.305 "iops": 5197.880318980349, 00:36:50.305 "mibps": 649.7350398725437, 00:36:50.305 "io_failed": 0, 00:36:50.305 "io_timeout": 0, 00:36:50.305 "avg_latency_us": 3075.1974773884976, 00:36:50.305 "min_latency_us": 655.36, 00:36:50.305 "max_latency_us": 5773.409523809524 00:36:50.305 } 00:36:50.305 ], 00:36:50.305 "core_count": 1 00:36:50.305 } 00:36:50.305 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:50.305 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:50.305 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:50.305 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:50.305 | select(.opcode=="crc32c") 00:36:50.305 | "\(.module_name) \(.executed)"' 00:36:50.305 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2901175 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2901175 ']' 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2901175 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901175 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901175' 00:36:50.564 killing process with pid 2901175 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2901175 00:36:50.564 Received shutdown signal, test time was about 2.000000 seconds 00:36:50.564 00:36:50.564 Latency(us) 00:36:50.564 [2024-12-13T02:47:51.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.564 [2024-12-13T02:47:51.773Z] =================================================================================================================== 00:36:50.564 [2024-12-13T02:47:51.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:50.564 03:47:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2901175 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2902064 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2902064 /var/tmp/bperf.sock 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2902064 ']' 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.501 03:47:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:51.501 [2024-12-13 03:47:52.672292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:36:51.501 [2024-12-13 03:47:52.672382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902064 ] 00:36:51.760 [2024-12-13 03:47:52.785559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.760 [2024-12-13 03:47:52.888132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:52.328 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.328 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:52.328 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:52.328 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:52.328 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:52.895 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:52.895 03:47:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:53.153 nvme0n1 00:36:53.153 03:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:53.153 03:47:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:53.410 Running I/O for 2 seconds... 
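For reference, the digest_clean randwrite pass traced above reduces to the following hand-runnable sequence. Paths are abbreviated to $SPDK_DIR; every command is taken from the xtrace lines above, so treat this as a condensed sketch of the trace rather than the test script itself:

    # start bdevperf paused on its own RPC socket (-z --wait-for-rpc), core mask 0x2
    $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish framework init, then attach the NVMe/TCP controller with data digest enabled (--ddgst)
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload; the JSON block that follows in the log is this call's output
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests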
00:36:55.279 24797.00 IOPS, 96.86 MiB/s [2024-12-13T02:47:56.488Z] 24883.50 IOPS, 97.20 MiB/s 00:36:55.279 Latency(us) 00:36:55.279 [2024-12-13T02:47:56.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:55.279 nvme0n1 : 2.01 24885.78 97.21 0.00 0.00 5136.33 2434.19 8862.96 00:36:55.279 [2024-12-13T02:47:56.488Z] =================================================================================================================== 00:36:55.279 [2024-12-13T02:47:56.488Z] Total : 24885.78 97.21 0.00 0.00 5136.33 2434.19 8862.96 00:36:55.279 { 00:36:55.279 "results": [ 00:36:55.279 { 00:36:55.279 "job": "nvme0n1", 00:36:55.279 "core_mask": "0x2", 00:36:55.279 "workload": "randwrite", 00:36:55.279 "status": "finished", 00:36:55.279 "queue_depth": 128, 00:36:55.279 "io_size": 4096, 00:36:55.279 "runtime": 2.007532, 00:36:55.279 "iops": 24885.780151947765, 00:36:55.279 "mibps": 97.21007871854596, 00:36:55.279 "io_failed": 0, 00:36:55.279 "io_timeout": 0, 00:36:55.279 "avg_latency_us": 5136.329084172832, 00:36:55.279 "min_latency_us": 2434.194285714286, 00:36:55.279 "max_latency_us": 8862.96380952381 00:36:55.279 } 00:36:55.279 ], 00:36:55.279 "core_count": 1 00:36:55.279 } 00:36:55.279 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:55.279 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:55.279 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:55.279 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:55.279 | select(.opcode=="crc32c") 00:36:55.279 | "\(.module_name) \(.executed)"' 00:36:55.279 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2902064 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2902064 ']' 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2902064 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902064 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902064' 00:36:55.538 killing process with pid 2902064 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2902064 00:36:55.538 Received shutdown signal, test time was about 2.000000 seconds 00:36:55.538 00:36:55.538 Latency(us) 00:36:55.538 [2024-12-13T02:47:56.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.538 [2024-12-13T02:47:56.747Z] =================================================================================================================== 00:36:55.538 [2024-12-13T02:47:56.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:55.538 03:47:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2902064 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2902754 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2902754 /var/tmp/bperf.sock 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2902754 ']' 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:56.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:56.473 03:47:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:56.473 [2024-12-13 03:47:57.612307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:36:56.473 [2024-12-13 03:47:57.612397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902754 ] 00:36:56.473 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:56.473 Zero copy mechanism will not be used. 00:36:56.869 [2024-12-13 03:47:57.727239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.869 [2024-12-13 03:47:57.832130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.462 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.462 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:57.462 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:57.462 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:57.462 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:57.721 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.721 03:47:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.979 nvme0n1 00:36:57.979 03:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:57.979 03:47:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:58.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:58.238 Zero copy mechanism will not be used. 00:36:58.238 Running I/O for 2 seconds... 
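The pass/fail decision for each of these runs comes from the accel statistics rather than the raw I/O numbers: get_accel_stats queries bdevperf over its RPC socket and keeps only the crc32c row, after which the script requires a non-zero executed count and a module name matching the expected one (software here, since scan_dsa=false). Condensed from the trace, with the same $SPDK_DIR abbreviation as above:

    # fetch accel stats from bdevperf and reduce them to "<module_name> <executed>" for crc32c
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # digest.sh then does: read -r acc_module acc_executed; (( acc_executed > 0 ));
    # and checks that acc_module matches exp_module (software)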
00:37:00.110 5744.00 IOPS, 718.00 MiB/s [2024-12-13T02:48:01.319Z] 6272.50 IOPS, 784.06 MiB/s 00:37:00.110 Latency(us) 00:37:00.110 [2024-12-13T02:48:01.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.110 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:00.110 nvme0n1 : 2.00 6270.34 783.79 0.00 0.00 2546.85 2012.89 11172.33 00:37:00.110 [2024-12-13T02:48:01.319Z] =================================================================================================================== 00:37:00.110 [2024-12-13T02:48:01.319Z] Total : 6270.34 783.79 0.00 0.00 2546.85 2012.89 11172.33 00:37:00.110 { 00:37:00.110 "results": [ 00:37:00.110 { 00:37:00.110 "job": "nvme0n1", 00:37:00.110 "core_mask": "0x2", 00:37:00.110 "workload": "randwrite", 00:37:00.110 "status": "finished", 00:37:00.110 "queue_depth": 16, 00:37:00.110 "io_size": 131072, 00:37:00.110 "runtime": 2.003241, 00:37:00.110 "iops": 6270.338915786967, 00:37:00.110 "mibps": 783.7923644733709, 00:37:00.110 "io_failed": 0, 00:37:00.110 "io_timeout": 0, 00:37:00.110 "avg_latency_us": 2546.8514053703643, 00:37:00.110 "min_latency_us": 2012.8914285714286, 00:37:00.110 "max_latency_us": 11172.327619047619 00:37:00.110 } 00:37:00.110 ], 00:37:00.110 "core_count": 1 00:37:00.110 } 00:37:00.110 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:00.110 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:00.110 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:00.110 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:00.110 | select(.opcode=="crc32c") 00:37:00.110 | "\(.module_name) \(.executed)"' 00:37:00.110 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2902754 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2902754 ']' 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2902754 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902754 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902754' 00:37:00.369 killing process with pid 2902754 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2902754 00:37:00.369 Received shutdown signal, test time was about 2.000000 seconds 00:37:00.369 00:37:00.369 Latency(us) 00:37:00.369 [2024-12-13T02:48:01.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:00.369 [2024-12-13T02:48:01.578Z] =================================================================================================================== 00:37:00.369 [2024-12-13T02:48:01.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:00.369 03:48:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2902754 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2900238 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2900238 ']' 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2900238 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900238 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900238' 00:37:01.305 killing process with pid 2900238 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2900238 00:37:01.305 03:48:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2900238 00:37:02.681 00:37:02.681 real 0m22.247s 00:37:02.681 user 0m41.787s 00:37:02.681 sys 0m4.860s 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:02.681 ************************************ 00:37:02.681 END TEST nvmf_digest_clean 00:37:02.681 ************************************ 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:02.681 ************************************ 00:37:02.681 START TEST nvmf_digest_error 00:37:02.681 ************************************ 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2904016 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2904016 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2904016 ']' 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.681 03:48:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:02.681 [2024-12-13 03:48:03.762384] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:02.681 [2024-12-13 03:48:03.762473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.681 [2024-12-13 03:48:03.879857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.940 [2024-12-13 03:48:03.983236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.940 [2024-12-13 03:48:03.983280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.940 [2024-12-13 03:48:03.983291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.940 [2024-12-13 03:48:03.983317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.940 [2024-12-13 03:48:03.983326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:02.940 [2024-12-13 03:48:03.984815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.508 [2024-12-13 03:48:04.603051] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.508 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.766 null0 00:37:03.766 [2024-12-13 03:48:04.952670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:04.027 [2024-12-13 03:48:04.976935] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2904207 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2904207 /var/tmp/bperf.sock 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2904207 ']' 
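On the target side, the notices above show the bring-up order for the digest_error test: nvmf_tgt is started inside the test netns with --wait-for-rpc, crc32c is assigned to the error accel module before framework init, and common_target_config then creates a null bdev (null0), the TCP transport, and a listener on 10.0.0.2:4420. The configuration RPCs themselves are not echoed in the trace; a minimal hand-written equivalent would look roughly as follows, where the bdev size/block size and subsystem options are illustrative assumptions and only the NQN, address, and port come from the log:

    # assumed-equivalent target setup; the script's actual RPC arguments are not shown in the trace
    rpc.py accel_assign_opc -o crc32c -m error        # route crc32c through the error-injection module
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_null_create null0 100 4096            # name from the log; size/block size assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420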
00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:04.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.027 03:48:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.027 [2024-12-13 03:48:05.056267] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:04.027 [2024-12-13 03:48:05.056358] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904207 ] 00:37:04.027 [2024-12-13 03:48:05.168655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.286 [2024-12-13 03:48:05.275913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.852 03:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:04.852 03:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:04.852 03:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.852 03:48:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:05.111 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:05.111 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.111 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:05.111 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.111 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.111 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.370 nvme0n1 00:37:05.370 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:05.370 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.370 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
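The error-path variant then differs from digest_clean only in how it is wired up: retries are disabled on the bdevperf side so a digest failure surfaces immediately, and the target's error accel module is switched from disable to corrupt mode for crc32c once the controller is attached. Condensed from the xtrace lines above (rpc_cmd goes to the target's RPC socket inside the netns, bperf_rpc to /var/tmp/bperf.sock; paths abbreviated):

    # bdevperf side: keep per-NVMe error stats and disable retries so digest errors are not masked
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: injection stays disabled while the controller attaches ...
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ... then switched to corrupt mode for crc32c (-t corrupt -i 256, arguments as echoed in the trace)
    # before the timed run; the corrupted digests appear below as "data digest error" messages and
    # COMMAND TRANSIENT TRANSPORT ERROR completions
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    bdevperf.py -s /var/tmp/bperf.sock perform_tests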
00:37:05.370 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.370 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:05.370 03:48:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.629 Running I/O for 2 seconds... 00:37:05.629 [2024-12-13 03:48:06.613844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.613888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.613928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.628223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.628257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.628272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.637549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.637577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.637590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.649197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.649226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.649239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.664008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.664038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.664051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.673474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.673503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.673516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.685072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.685101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.685114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.699058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.699087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.699100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.708993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.709020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.709033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.720081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.720109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.720121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.730877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.730904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.730948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.741324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.741352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.741364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.755102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.755131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 
03:48:06.755143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.766163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.766190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.766203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.778624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.778651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.778664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.789611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.629 [2024-12-13 03:48:06.789650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.629 [2024-12-13 03:48:06.789662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.629 [2024-12-13 03:48:06.799610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.630 [2024-12-13 03:48:06.799637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.630 [2024-12-13 03:48:06.799649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.630 [2024-12-13 03:48:06.810152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.630 [2024-12-13 03:48:06.810179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.630 [2024-12-13 03:48:06.810191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.630 [2024-12-13 03:48:06.819981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.630 [2024-12-13 03:48:06.820007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.630 [2024-12-13 03:48:06.820019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.630 [2024-12-13 03:48:06.833545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.630 [2024-12-13 03:48:06.833572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:4561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.630 [2024-12-13 03:48:06.833584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.888 [2024-12-13 03:48:06.846464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.888 [2024-12-13 03:48:06.846490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.888 [2024-12-13 03:48:06.846503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.888 [2024-12-13 03:48:06.856271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.888 [2024-12-13 03:48:06.856298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.888 [2024-12-13 03:48:06.856310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.888 [2024-12-13 03:48:06.869249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.888 [2024-12-13 03:48:06.869275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.869287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.878904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.878937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.878950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.892142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.892169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.892182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.902453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.902480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.902492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.915250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 
03:48:06.915278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.915290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.926405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.926433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.926445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.935910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.935943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.935956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.946916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.946949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.946961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.957220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.957247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.957260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.970431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.970458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.970470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.981855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.981885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.981898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:06.992707] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:06.992734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:06.992746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.006568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.006596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.006609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.019249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.019275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.019288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.029544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.029570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.029583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.040577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.040604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.040616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.050633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.050659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.050672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.062492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.062528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.062541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.072174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.072200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.072212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.083805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.083833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.083845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.889 [2024-12-13 03:48:07.093517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:05.889 [2024-12-13 03:48:07.093544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.889 [2024-12-13 03:48:07.093556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.104727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.104753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.104766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.115475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.115502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.115514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.125864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.125890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.125902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.136532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.136559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.136572] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.147570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.147598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.147611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.160308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.160336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.160348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.170022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.148 [2024-12-13 03:48:07.170048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.148 [2024-12-13 03:48:07.170065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.148 [2024-12-13 03:48:07.181180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.181207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.181220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.191991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.192019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.192031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.202311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.202338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.202351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.211959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.211994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23366 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.212006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.223199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.223226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.223238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.235202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.235228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.235241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.244931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.244958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.244971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.257265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.257293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.257306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.269720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.269748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.269760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.278966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.278992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.279004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.292643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.292670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.292682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.306216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.306244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.306257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.319500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.319528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.319540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.329575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.329601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.329613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.342598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.342625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.342637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.149 [2024-12-13 03:48:07.352248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.149 [2024-12-13 03:48:07.352275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.149 [2024-12-13 03:48:07.352288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.408 [2024-12-13 03:48:07.365835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.408 [2024-12-13 03:48:07.365880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.408 [2024-12-13 03:48:07.365897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.408 [2024-12-13 03:48:07.376264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x615000325f80) 00:37:06.408 [2024-12-13 03:48:07.376292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.408 [2024-12-13 03:48:07.376304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.408 [2024-12-13 03:48:07.386303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.408 [2024-12-13 03:48:07.386331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.408 [2024-12-13 03:48:07.386344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.408 [2024-12-13 03:48:07.398074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.408 [2024-12-13 03:48:07.398102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.408 [2024-12-13 03:48:07.398115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.408 [2024-12-13 03:48:07.408276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.408 [2024-12-13 03:48:07.408304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.408 [2024-12-13 03:48:07.408316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.408 [2024-12-13 03:48:07.418866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.408 [2024-12-13 03:48:07.418894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.408 [2024-12-13 03:48:07.418906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.430122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.430149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.430161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.439732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.439759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.439771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 
03:48:07.451248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.451274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.451287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.462369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.462395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.462408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.472866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.472893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.472906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.483775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.483802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.483814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.497809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.497836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.497849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.506981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.507008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.507020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.518725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.518752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.518765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.529577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.529605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.529618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.542191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.542217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.542229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.551804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.551832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.551848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.566037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.566067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.566079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.575689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.575715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.575727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.588845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.588873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.588887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 22422.00 IOPS, 87.59 MiB/s [2024-12-13T02:48:07.618Z] [2024-12-13 03:48:07.601437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.601465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11286 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.601477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.409 [2024-12-13 03:48:07.611062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.409 [2024-12-13 03:48:07.611090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.409 [2024-12-13 03:48:07.611103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.624286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.624313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.624328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.635111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.635140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.635153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.647535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.647563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.647576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.659523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.659551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.659564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.670113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.670141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.670153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.680527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.680555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.691110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.668 [2024-12-13 03:48:07.691138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.668 [2024-12-13 03:48:07.691151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.668 [2024-12-13 03:48:07.701421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.701448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.701461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.712730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.712757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.712769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.722853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.722880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.722893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.737194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.737223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.737236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.746528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.746555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.746571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.760398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.760426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.760438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.774215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.774244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.774257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.787283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.787310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.787323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.797211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.797239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.797253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.810903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.810938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.810950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.823957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.823985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.823998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.834315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.834344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.834356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.848748] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.848776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.848789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.861802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.861829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.861841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.669 [2024-12-13 03:48:07.872583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.669 [2024-12-13 03:48:07.872610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.669 [2024-12-13 03:48:07.872623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.928 [2024-12-13 03:48:07.883860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.928 [2024-12-13 03:48:07.883887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.928 [2024-12-13 03:48:07.883899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.928 [2024-12-13 03:48:07.895493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.928 [2024-12-13 03:48:07.895521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.928 [2024-12-13 03:48:07.895534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.928 [2024-12-13 03:48:07.904994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.928 [2024-12-13 03:48:07.905022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.928 [2024-12-13 03:48:07.905035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.928 [2024-12-13 03:48:07.916645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.928 [2024-12-13 03:48:07.916673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.928 [2024-12-13 03:48:07.916686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.926809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.926836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.926848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.938224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.938256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.938269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.948582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.948610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.948626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.959858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.959885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.959897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.971332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.971358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.971371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.980671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.980697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.980710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:07.991040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:07.991067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:07.991080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.003358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.003385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.003397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.013227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.013254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.013266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.025987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.026014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.026026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.035758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.035784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.035796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.050732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.050760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.050772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.065331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.065358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.065370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.079023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.079050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16751 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.079063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.093532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.093558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.093570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.103943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.103969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.103982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.118489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.118518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.118530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.929 [2024-12-13 03:48:08.130589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:06.929 [2024-12-13 03:48:08.130616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.929 [2024-12-13 03:48:08.130629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.188 [2024-12-13 03:48:08.140784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.188 [2024-12-13 03:48:08.140813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.140826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.155563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.155591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.155606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.168819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.168847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.168861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.179237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.179264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.179277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.189394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.189420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.189432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.203761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.203787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.203825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.215486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.215513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.215526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.226337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.226364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.226376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.237032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.237058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.237071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.247618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.247644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.247657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.258594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.258621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.258634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.270976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.271003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.271016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.281609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.281637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.281649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.296220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.296248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.296260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.308777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.308803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.308815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.318948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.318975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.318987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 
03:48:08.332060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.332086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.332099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.342488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.342516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.342528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.352599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.352626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.352642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.363235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.363263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.363275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.377091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.377119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.377132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.189 [2024-12-13 03:48:08.387240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.189 [2024-12-13 03:48:08.387269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.189 [2024-12-13 03:48:08.387281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.448 [2024-12-13 03:48:08.402165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.448 [2024-12-13 03:48:08.402194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.448 [2024-12-13 03:48:08.402206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.448 [2024-12-13 03:48:08.415115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.448 [2024-12-13 03:48:08.415143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.448 [2024-12-13 03:48:08.415157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.448 [2024-12-13 03:48:08.425282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.448 [2024-12-13 03:48:08.425309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.448 [2024-12-13 03:48:08.425321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.439059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.439085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.439097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.449444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.449471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.449483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.462670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.462697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.462709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.472626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.472653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.472665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.486263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.486289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 
03:48:08.486301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.496198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.496225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.496237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.511070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.511098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.511110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.524065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.524092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.524104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.533517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.533542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.533554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.547663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.547689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.547702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.561790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.561818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.561834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.574698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.574725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:7737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.574737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.584944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.584969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.584981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 [2024-12-13 03:48:08.599155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:07.449 [2024-12-13 03:48:08.599185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.449 [2024-12-13 03:48:08.599197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.449 21954.50 IOPS, 85.76 MiB/s 00:37:07.449 Latency(us) 00:37:07.449 [2024-12-13T02:48:08.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.449 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:07.449 nvme0n1 : 2.01 21959.68 85.78 0.00 0.00 5823.91 3183.18 18974.23 00:37:07.449 [2024-12-13T02:48:08.658Z] =================================================================================================================== 00:37:07.449 [2024-12-13T02:48:08.658Z] Total : 21959.68 85.78 0.00 0.00 5823.91 3183.18 18974.23 00:37:07.449 { 00:37:07.449 "results": [ 00:37:07.449 { 00:37:07.449 "job": "nvme0n1", 00:37:07.449 "core_mask": "0x2", 00:37:07.449 "workload": "randread", 00:37:07.449 "status": "finished", 00:37:07.449 "queue_depth": 128, 00:37:07.449 "io_size": 4096, 00:37:07.449 "runtime": 2.005357, 00:37:07.449 "iops": 21959.68099445635, 00:37:07.449 "mibps": 85.78000388459512, 00:37:07.449 "io_failed": 0, 00:37:07.449 "io_timeout": 0, 00:37:07.449 "avg_latency_us": 5823.905318060462, 00:37:07.449 "min_latency_us": 3183.177142857143, 00:37:07.449 "max_latency_us": 18974.23238095238 00:37:07.449 } 00:37:07.449 ], 00:37:07.449 "core_count": 1 00:37:07.449 } 00:37:07.449 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:07.449 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:07.449 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:07.449 | .driver_specific 00:37:07.449 | .nvme_error 00:37:07.449 | .status_code 00:37:07.449 | .command_transient_transport_error' 00:37:07.449 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2904207 00:37:07.708 03:48:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2904207 ']' 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2904207 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904207 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904207' 00:37:07.708 killing process with pid 2904207 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2904207 00:37:07.708 Received shutdown signal, test time was about 2.000000 seconds 00:37:07.708 00:37:07.708 Latency(us) 00:37:07.708 [2024-12-13T02:48:08.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.708 [2024-12-13T02:48:08.917Z] =================================================================================================================== 00:37:07.708 [2024-12-13T02:48:08.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:07.708 03:48:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2904207 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2905224 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2905224 /var/tmp/bperf.sock 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2905224 ']' 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:08.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
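Condensed, the bdevperf bring-up traced above amounts to the following sketch. It is reconstructed from the xtrace lines only: the binary path, socket, and workload flags are the ones shown in the trace, waitforlisten is the autotest helper invoked there, and how digest.sh actually captures the child PID is not visible in this excerpt (the $! form below is an assumption).

    # launch bdevperf idle (-z), pointed at its own RPC socket, for the 131072-byte qd=16 randread error run
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!   # the trace records pid 2905224; the capture mechanism in digest.sh is not shown here
    # autotest helper: block until /var/tmp/bperf.sock is accepting RPC connections
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

With -z the workload does not start on its own; it only runs once the perform_tests RPC seen further down in the trace is issued.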
00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.644 03:48:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:08.902 [2024-12-13 03:48:09.873149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:08.902 [2024-12-13 03:48:09.873237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905224 ] 00:37:08.902 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:08.902 Zero copy mechanism will not be used. 00:37:08.902 [2024-12-13 03:48:09.986487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.902 [2024-12-13 03:48:10.104313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.838 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:09.838 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:09.838 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:09.839 03:48:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:10.102 nvme0n1 00:37:10.102 03:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:10.102 03:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.102 03:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:10.102 03:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.102 03:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:10.102 03:48:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
00:37:10.364 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:10.364 Zero copy mechanism will not be used. 00:37:10.364 Running I/O for 2 seconds... 00:37:10.364 [2024-12-13 03:48:11.402440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.402483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.402499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.408803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.408836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.408850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.415187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.415216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.415229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.421275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.421302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.421315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.427415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.427442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.427455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.433529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.433557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.433569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.439491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.439519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.364 [2024-12-13 03:48:11.439532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.364 [2024-12-13 03:48:11.445366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.364 [2024-12-13 03:48:11.445395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.445406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.451505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.451533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.451545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.457814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.457842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.464040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.464066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.464078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.469951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.469978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.469989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.475915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.475950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.475966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.482014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 
03:48:11.482041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.482053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.488011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.488038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.488048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.493935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.493961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.493973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.500552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.500585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.500598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.508240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.508267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.508279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.515612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.515639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.515651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.523302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.523330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.523342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.531157] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.531185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.531198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.538679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.538707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.538719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.543440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.543465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.543477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.548440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.548466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.548479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.554483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.554510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.554522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.561004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.561031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.561043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.365 [2024-12-13 03:48:11.569299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.365 [2024-12-13 03:48:11.569327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.365 [2024-12-13 03:48:11.569341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.577218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.577247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.577259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.584251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.584279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.584292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.591616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.591644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.591659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.598593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.598621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.598633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.604733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.604759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.604771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.610745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.610771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.610783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.616321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.616348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.616360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.622078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.622104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.622116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.628089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.628116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.628128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.634089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.634116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.634128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.640106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.640132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.640144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.646170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.646201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.646213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.652192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.652218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.652231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.658249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.658275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.658287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.664282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.664308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.664321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.670367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.670394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.670406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.676542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.676568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.676580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.682703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.682729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.682741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.688817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.688844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.688855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.694837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.694862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.694877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.700916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.700947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.700959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.706955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.706981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.706992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.713024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.625 [2024-12-13 03:48:11.713050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.625 [2024-12-13 03:48:11.713061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.625 [2024-12-13 03:48:11.718979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.719004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.719016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.725031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.725057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.725068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.731094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.731128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.731140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.737062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.737088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.737100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.743391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.743418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.743429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.749535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.749567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.749579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.755692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.755718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.755730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.761687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.761713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.761725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.767648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.767674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.767686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.773728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.773757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.773768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.779924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.779952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.779964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.786125] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.786154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.786166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.792241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.792269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.792280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.798368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.798395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.798412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.804614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.804644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.804656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.810366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.810394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.810406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.816372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.816398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.816411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.822246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.822273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.822284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.626 [2024-12-13 03:48:11.828086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.626 [2024-12-13 03:48:11.828114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.626 [2024-12-13 03:48:11.828126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.886 [2024-12-13 03:48:11.833894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.886 [2024-12-13 03:48:11.833929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.886 [2024-12-13 03:48:11.833942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.886 [2024-12-13 03:48:11.839366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.886 [2024-12-13 03:48:11.839393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.886 [2024-12-13 03:48:11.839404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.886 [2024-12-13 03:48:11.843932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.886 [2024-12-13 03:48:11.843960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.886 [2024-12-13 03:48:11.843971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.886 [2024-12-13 03:48:11.851012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.886 [2024-12-13 03:48:11.851043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.886 [2024-12-13 03:48:11.851055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.857051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.857077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.857090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.863098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.863123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.863135] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.869041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.869068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.869080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.875244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.875270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.875281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.881293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.881319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.881330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.887249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.887275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.887287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.893235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.893261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.893272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.899272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.899298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.899314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.905380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.905406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.905417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.911276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.911302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.911314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.917387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.917413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.917426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.923313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.923340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.923352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.929309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.929336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.929347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.935440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.935467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.935478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.941317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.941343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.941355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.947138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 
03:48:11.947165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.947177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.953140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.953171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.953182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.959138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.959164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.959175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.964755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.964783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.964795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.970758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.970784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.970796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.976806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.976835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.976846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.982816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.982843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.988810] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.988836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.988848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:11.994741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:11.994767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:11.994779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:12.000650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:12.000676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:12.000688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:12.006646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:12.006673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.887 [2024-12-13 03:48:12.006684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.887 [2024-12-13 03:48:12.012627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.887 [2024-12-13 03:48:12.012665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.012676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.018645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.018671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.018682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.024672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.024698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.024710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.030660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.030686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.030697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.036719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.036746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.036757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.042701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.042727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.042739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.048665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.048691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.048702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.054705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.054735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.054747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.060729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.060755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.060765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.066667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.066693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.066705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.072656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.072681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.072693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.078610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.078637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.078649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.084562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.084589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.084600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:10.888 [2024-12-13 03:48:12.090528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:10.888 [2024-12-13 03:48:12.090555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.888 [2024-12-13 03:48:12.090566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.148 [2024-12-13 03:48:12.096192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.148 [2024-12-13 03:48:12.096219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.148 [2024-12-13 03:48:12.096230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.148 [2024-12-13 03:48:12.102190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.148 [2024-12-13 03:48:12.102218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.148 [2024-12-13 03:48:12.102229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.148 [2024-12-13 03:48:12.108181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.148 [2024-12-13 03:48:12.108207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:11.148 [2024-12-13 03:48:12.108218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.148 [2024-12-13 03:48:12.114023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.148 [2024-12-13 03:48:12.114050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.148 [2024-12-13 03:48:12.114061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.148 [2024-12-13 03:48:12.117372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.148 [2024-12-13 03:48:12.117398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.148 [2024-12-13 03:48:12.117410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.148 [2024-12-13 03:48:12.123313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.123339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.123351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.129130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.129156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.129168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.135482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.135511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.135523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.141538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.141567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.141579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.147435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.147462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.147475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.153503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.153534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.153546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.159192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.159219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.159231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.165231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.165258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.165270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.171257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.171285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.177297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.177325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.177337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.183316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.183343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.183356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.189380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.189406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.189418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.195459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.195487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.195498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.201412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.201439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.201451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.207604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.207631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.207643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.213251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.213277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.213289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.219160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.219187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.219198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.225094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.225119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.225130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.228508] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.228534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.228546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.234477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.234503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.234539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.240329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.240355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.240367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.246166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.246192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.246203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.252086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.252112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.252128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.258145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.258175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.258186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.264063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.149 [2024-12-13 03:48:12.264089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.149 [2024-12-13 03:48:12.264100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.149 [2024-12-13 03:48:12.270118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.270143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.270155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.276022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.276048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.276060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.281945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.281972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.281983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.287964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.287989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.288000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.293851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.293877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.293889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.300053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.300078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.300089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.306134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.306159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.306170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.312163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.312189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.312200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.318303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.318329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.318340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.324341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.324367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.324378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.330237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.330263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.330274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.336270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.336297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.336308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.342672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.342699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.342711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.348881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.348906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.348924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.150 [2024-12-13 03:48:12.354893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.150 [2024-12-13 03:48:12.354935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.150 [2024-12-13 03:48:12.354968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.361069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.361095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.361107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.367116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.367142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.367153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.373149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.373175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.373187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.379220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.379246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.379258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.385164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.385190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.385202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.391245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.391271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.391283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.410 5073.00 IOPS, 634.12 MiB/s [2024-12-13T02:48:12.619Z] [2024-12-13 03:48:12.398847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.398875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.410 [2024-12-13 03:48:12.398886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.410 [2024-12-13 03:48:12.405257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.410 [2024-12-13 03:48:12.405290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.405303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.412507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.412535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.412547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.420660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.420689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.420702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.428978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.429004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.429017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.435555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.435581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.435592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.441625] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.441651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.441663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.447691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.447717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.447729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.453793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.453819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.453831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.459770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.459795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.459807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.465759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.465784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.465800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.471805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.471830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.471841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.477783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.477809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.477821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.483761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.483788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.483799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.489831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.489857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.489869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.495829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.495854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.495866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.501825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.501871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.507780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.507805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.507816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.513750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.513788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.519761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.519788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.519799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.525774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.525800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.525811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.531672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.531698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.531709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.537688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.537715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.537726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.543671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.543697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.543708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.549650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.549676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.549687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.555741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.555766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.555777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.411 [2024-12-13 03:48:12.561713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.411 [2024-12-13 03:48:12.561739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.411 [2024-12-13 03:48:12.561750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.567765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.567790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.567806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.573745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.573771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.573783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.579768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.579794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.579804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.585775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.585801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.585812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.591808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.591834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.591845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.597911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.597943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.597954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.603271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.603298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.603310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.609329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.609355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.609367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.412 [2024-12-13 03:48:12.615242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.412 [2024-12-13 03:48:12.615269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.412 [2024-12-13 03:48:12.615280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.621709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.621736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.621748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.627831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.627857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.627869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.633815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.633842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.633853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.639934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.639960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.639971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.646063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.646089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.646101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.652195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.652221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.652233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.658250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.658276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.658288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.664265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.664292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.664303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.670318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.670344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.670359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.676458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.676485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.676496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.682617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.682644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.682657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.688674] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.688700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.688711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.694789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.694816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.694828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.700825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.700851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.700862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.672 [2024-12-13 03:48:12.706815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.672 [2024-12-13 03:48:12.706842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.672 [2024-12-13 03:48:12.706853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.712830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.712856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.712868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.718423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.718451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.718463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.724383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.724414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.724426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.730157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.730182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.730194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.735921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.735947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.735959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.741807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.741833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.741844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.747817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.747844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.747855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.753927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.753953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.753964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.760040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.760067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.760079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.766064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.766090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.766102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.772051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.772076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.772092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.778040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.778065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.778076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.784101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.784128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.784139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.790000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.790025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.790037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.793315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.793340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.793352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.799356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.799382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.799394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.806113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.806139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.806151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.811478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.811504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.811516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.817514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.817540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.817552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.824179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.824211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.824224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.830405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.830432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.830445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.839070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.839098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.839109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.846982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.847008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.847020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.854740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.854766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.854778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.860970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.673 [2024-12-13 03:48:12.860995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.673 [2024-12-13 03:48:12.861007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.673 [2024-12-13 03:48:12.866962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.674 [2024-12-13 03:48:12.866987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.674 [2024-12-13 03:48:12.866999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.674 [2024-12-13 03:48:12.872953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.674 [2024-12-13 03:48:12.872978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.674 [2024-12-13 03:48:12.872990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.879821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.879848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.879868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.885149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.885175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.885187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.891175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.891201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.891212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.897150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.897176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.897188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.903190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.903214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.903226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.909262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.909287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.909298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.915182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.915207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.915219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.921132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.921156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.921168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.927156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.927181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.927192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.933193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.933223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.933235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.938804] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.938830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.938841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.945142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.945169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.945180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.951252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.951279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.951291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.957199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.957225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.957236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.963214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.963240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.963252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.968949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.934 [2024-12-13 03:48:12.968974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.934 [2024-12-13 03:48:12.968986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.934 [2024-12-13 03:48:12.974870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:12.974896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:12.974908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:12.981044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:12.981070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:12.981082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:12.987179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:12.987205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:12.987217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:12.993149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:12.993175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:12.993186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:12.999486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:12.999512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:12.999523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.005707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.005733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.005744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.011929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.011954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.011965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.018347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.018373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.018408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.024708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.024735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.024746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.030962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.030988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.030999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.037632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.037662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.037674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.044128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.044154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.044166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.050575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.050601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.050613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.056846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.056872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.056883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.063066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.063092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.063104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.069251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.069277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.069288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.075552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.075579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.075590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.081932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.081958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.081969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.088437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.088464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.088475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.094777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.094805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.094817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.101218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.101245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.101256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.108267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.108295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.108306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.114515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.114541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.114553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.120753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.120779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.120791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.126856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.935 [2024-12-13 03:48:13.126883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.935 [2024-12-13 03:48:13.126895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:11.935 [2024-12-13 03:48:13.132980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.936 [2024-12-13 03:48:13.133006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.936 [2024-12-13 03:48:13.133017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:11.936 [2024-12-13 03:48:13.139401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:11.936 [2024-12-13 03:48:13.139429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:11.936 [2024-12-13 03:48:13.139442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.145739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.145770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.145782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.152209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.152237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.152248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.158098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.158126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.158138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.164171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.164198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.164210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.170436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.170463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.170474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.176777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.176805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.176817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.182973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.182999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.183011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.189222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.189249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.196 [2024-12-13 03:48:13.189261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.196 [2024-12-13 03:48:13.195444] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.196 [2024-12-13 03:48:13.195471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.195483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.201789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.201816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.201828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.208065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.208092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.208104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.214614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.214643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.214654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.220927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.220954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.220966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.227244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.227271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.227284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.233559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.233585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.233598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.239728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.239756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.239768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.246655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.246682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.246695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.252825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.252852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.252870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.259020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.259046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.259058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.265130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.265157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.265168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.271203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.271231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.271242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.277349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.277376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.277387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.283591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.283619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.283631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.289806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.289839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.289851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.295939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.295966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.295978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.302096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.302134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.302145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.309125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.309151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.309164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.316747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.316774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.316785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.323886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.323913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.323932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.331751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.331780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.331792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.340414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.340444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.340455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.348878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.348907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.348925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.357448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.357476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.197 [2024-12-13 03:48:13.357489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.197 [2024-12-13 03:48:13.366526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.197 [2024-12-13 03:48:13.366554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.198 [2024-12-13 03:48:13.366567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.198 [2024-12-13 03:48:13.374970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.198 [2024-12-13 03:48:13.374998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.198 [2024-12-13 03:48:13.375026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:12.198 [2024-12-13 03:48:13.383366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.198 [2024-12-13 03:48:13.383395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.198 [2024-12-13 03:48:13.383407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:12.198 [2024-12-13 03:48:13.392306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.198 [2024-12-13 03:48:13.392335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.198 [2024-12-13 03:48:13.392347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:12.198 4987.50 IOPS, 623.44 MiB/s [2024-12-13T02:48:13.407Z] [2024-12-13 03:48:13.402156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:12.198 [2024-12-13 03:48:13.402185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:12.198 [2024-12-13 03:48:13.402197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:12.456 00:37:12.456 Latency(us) 00:37:12.456 [2024-12-13T02:48:13.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.456 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:12.456 nvme0n1 : 2.00 4984.68 623.08 0.00 0.00 3206.31 741.18 12358.22 00:37:12.456 [2024-12-13T02:48:13.665Z] =================================================================================================================== 00:37:12.456 [2024-12-13T02:48:13.665Z] Total : 4984.68 623.08 0.00 0.00 3206.31 741.18 12358.22 00:37:12.456 { 00:37:12.456 "results": [ 00:37:12.456 { 00:37:12.456 "job": "nvme0n1", 00:37:12.456 "core_mask": "0x2", 00:37:12.456 "workload": "randread", 00:37:12.456 "status": "finished", 00:37:12.456 "queue_depth": 16, 00:37:12.456 "io_size": 131072, 00:37:12.456 "runtime": 2.004343, 00:37:12.456 "iops": 4984.675776551219, 00:37:12.456 "mibps": 623.0844720689024, 00:37:12.456 "io_failed": 0, 00:37:12.456 "io_timeout": 0, 00:37:12.456 "avg_latency_us": 3206.3134489612075, 00:37:12.456 "min_latency_us": 741.1809523809524, 00:37:12.456 "max_latency_us": 12358.217142857144 00:37:12.456 } 00:37:12.456 ], 00:37:12.456 "core_count": 1 00:37:12.456 } 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:12.456 | .driver_specific 00:37:12.456 | .nvme_error 00:37:12.456 | .status_code 00:37:12.456 | .command_transient_transport_error' 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 323 > 0 )) 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 
-- # killprocess 2905224 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2905224 ']' 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2905224 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:12.456 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2905224 00:37:12.714 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:12.714 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:12.714 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2905224' 00:37:12.714 killing process with pid 2905224 00:37:12.714 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2905224 00:37:12.714 Received shutdown signal, test time was about 2.000000 seconds 00:37:12.714 00:37:12.714 Latency(us) 00:37:12.714 [2024-12-13T02:48:13.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.714 [2024-12-13T02:48:13.923Z] =================================================================================================================== 00:37:12.714 [2024-12-13T02:48:13.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:12.714 03:48:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2905224 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2906020 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2906020 /var/tmp/bperf.sock 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2906020 ']' 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
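(Annotation, not part of the captured log.) The randread pass above ends with host/digest.sh reading the transient-transport-error counter back over the bperf RPC socket: bdev_get_iostat is called for nvme0n1 and its JSON is filtered with jq down to .driver_specific.nvme_error.status_code.command_transient_transport_error, and the test only proceeds because that count is greater than zero (323 in this run). A condensed, hand-written sketch of that readout follows; the rpc.py path, socket path, bdev name, and jq filter are copied from the trace above, while the "count" variable and the echo are illustrative only:

    # Fetch the per-bdev NVMe error statistics over the bperf RPC socket and pull out
    # the transient-transport-error counter that the injected CRC32C data-digest
    # corruption is expected to bump.
    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    if (( count > 0 )); then
        echo "observed ${count} transient transport errors"   # 323 in the pass above
    fi

The log then moves on to the next error-injection pass (randwrite, 4 KiB blocks, queue depth 128) with a fresh bdevperf instance, as traced below.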
00:37:13.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:13.651 03:48:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:13.651 [2024-12-13 03:48:14.640354] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:13.651 [2024-12-13 03:48:14.640443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906020 ] 00:37:13.651 [2024-12-13 03:48:14.755502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.911 [2024-12-13 03:48:14.860812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:14.476 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:15.044 nvme0n1 00:37:15.044 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:15.044 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:15.044 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:15.044 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.044 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:15.044 03:48:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:15.044 Running I/O for 
2 seconds... 00:37:15.044 [2024-12-13 03:48:16.100351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:15.044 [2024-12-13 03:48:16.101145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.044 [2024-12-13 03:48:16.101197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:15.044 [2024-12-13 03:48:16.110216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:15.044 [2024-12-13 03:48:16.110930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.044 [2024-12-13 03:48:16.110962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:15.044 [2024-12-13 03:48:16.121123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:15.044 [2024-12-13 03:48:16.122069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.044 [2024-12-13 03:48:16.122097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:15.044 [2024-12-13 03:48:16.132578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:15.044 [2024-12-13 03:48:16.133313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.044 [2024-12-13 03:48:16.133340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.044 [2024-12-13 03:48:16.143212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:15.045 [2024-12-13 03:48:16.144320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.144347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.153754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:15.045 [2024-12-13 03:48:16.154762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.154788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.163318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:15.045 [2024-12-13 03:48:16.164361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.164387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.174147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:15.045 [2024-12-13 03:48:16.175362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.175388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.184564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:15.045 [2024-12-13 03:48:16.186016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.186040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.193497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:15.045 [2024-12-13 03:48:16.194226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.194251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.204820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:15.045 [2024-12-13 03:48:16.205387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.205412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.215712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:15.045 [2024-12-13 03:48:16.216405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.216430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.228059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:15.045 [2024-12-13 03:48:16.229704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.229729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.239101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:15.045 [2024-12-13 03:48:16.240789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.240815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:15.045 [2024-12-13 03:48:16.246562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:15.045 [2024-12-13 03:48:16.247508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.045 [2024-12-13 03:48:16.247532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.258609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:15.305 [2024-12-13 03:48:16.259574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.259600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.269397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:15.305 [2024-12-13 03:48:16.270494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.270520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.279930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:15.305 [2024-12-13 03:48:16.281068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.281094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.290369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:15.305 [2024-12-13 03:48:16.291588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.291614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.301119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:15.305 [2024-12-13 03:48:16.302365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.302390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.310931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:15.305 [2024-12-13 03:48:16.312177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:425 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:15.305 [2024-12-13 03:48:16.312203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.319968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:15.305 [2024-12-13 03:48:16.320746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.320775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.329819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:15.305 [2024-12-13 03:48:16.330553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.330578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.341959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:15.305 [2024-12-13 03:48:16.342900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.342931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.352257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:15.305 [2024-12-13 03:48:16.353324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.353350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.362226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:15.305 [2024-12-13 03:48:16.363188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.363214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.373696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:15.305 [2024-12-13 03:48:16.374505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.384456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:15.305 [2024-12-13 03:48:16.385727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.385752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.395326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:15.305 [2024-12-13 03:48:16.396658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.396684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.404340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:15.305 [2024-12-13 03:48:16.405156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.405181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.414058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:15.305 [2024-12-13 03:48:16.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.414890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.424903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:15.305 [2024-12-13 03:48:16.425939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.425965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.435838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:37:15.305 [2024-12-13 03:48:16.436983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.437007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.446430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:15.305 [2024-12-13 03:48:16.447076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.447101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.457289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:15.305 [2024-12-13 03:48:16.458464] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.458489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.468178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:15.305 [2024-12-13 03:48:16.469470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.469497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.476973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:15.305 [2024-12-13 03:48:16.477486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.477511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.487862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:37:15.305 [2024-12-13 03:48:16.488490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.488515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.498759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:15.305 [2024-12-13 03:48:16.499526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.305 [2024-12-13 03:48:16.499555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:15.305 [2024-12-13 03:48:16.508660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:15.305 [2024-12-13 03:48:16.510047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.306 [2024-12-13 03:48:16.510072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.517927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:15.565 [2024-12-13 03:48:16.518651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.518676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.528884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173fef90 00:37:15.565 [2024-12-13 03:48:16.529765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.529790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.540210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:15.565 [2024-12-13 03:48:16.540824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.540849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.550701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:15.565 [2024-12-13 03:48:16.551606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.551632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.561130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:15.565 [2024-12-13 03:48:16.562070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.562095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.571489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:15.565 [2024-12-13 03:48:16.572496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.572522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.581202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:15.565 [2024-12-13 03:48:16.582237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.582262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.592135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:15.565 [2024-12-13 03:48:16.593268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.593293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.603105] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:15.565 [2024-12-13 03:48:16.604371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.604396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.614271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:15.565 [2024-12-13 03:48:16.615704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.565 [2024-12-13 03:48:16.615730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:15.565 [2024-12-13 03:48:16.625385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:15.566 [2024-12-13 03:48:16.626948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.626973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.634888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:15.566 [2024-12-13 03:48:16.636398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.636423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.643836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:15.566 [2024-12-13 03:48:16.644660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.644685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.654705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:15.566 [2024-12-13 03:48:16.655740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.655764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.665323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:37:15.566 [2024-12-13 03:48:16.665811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.665836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.566 
[2024-12-13 03:48:16.676250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:15.566 [2024-12-13 03:48:16.676871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.676896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.687117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:15.566 [2024-12-13 03:48:16.687903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.696870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:15.566 [2024-12-13 03:48:16.698297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.698322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.707458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:15.566 [2024-12-13 03:48:16.708475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.708499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.717787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:15.566 [2024-12-13 03:48:16.718815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.718840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.728759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:15.566 [2024-12-13 03:48:16.729870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.729895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.739614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:15.566 [2024-12-13 03:48:16.740934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.740959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.748416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:15.566 [2024-12-13 03:48:16.748899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.748929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.759240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:15.566 [2024-12-13 03:48:16.759894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.759926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:15.566 [2024-12-13 03:48:16.770193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:15.566 [2024-12-13 03:48:16.771013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.566 [2024-12-13 03:48:16.771042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.780730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:15.826 [2024-12-13 03:48:16.781856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.781880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.790701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:15.826 [2024-12-13 03:48:16.791517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.791543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.801129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:15.826 [2024-12-13 03:48:16.801915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.801945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.811822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:37:15.826 [2024-12-13 03:48:16.812840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.812866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.822663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:15.826 [2024-12-13 03:48:16.823749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.823775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.833554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:15.826 [2024-12-13 03:48:16.834828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.834853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.842382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:15.826 [2024-12-13 03:48:16.842870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.842894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.853250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:15.826 [2024-12-13 03:48:16.853901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.853932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.863898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:15.826 [2024-12-13 03:48:16.864892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.864923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.875079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:15.826 [2024-12-13 03:48:16.876386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.876411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.885727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:37:15.826 [2024-12-13 03:48:16.886500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:15.826 [2024-12-13 03:48:16.886525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.895543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:15.826 [2024-12-13 03:48:16.896970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.896993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.904502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:15.826 [2024-12-13 03:48:16.905230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.915371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:15.826 [2024-12-13 03:48:16.916279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.916303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.926292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:37:15.826 [2024-12-13 03:48:16.927310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.927336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.936834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:15.826 [2024-12-13 03:48:16.937349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.937374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.947322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:15.826 [2024-12-13 03:48:16.948107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.948136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.957489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:15.826 [2024-12-13 03:48:16.958045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:17519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.958069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.968066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:15.826 [2024-12-13 03:48:16.968932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.968957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.979050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df988 00:37:15.826 [2024-12-13 03:48:16.980002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.980027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:16.989937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:15.826 [2024-12-13 03:48:16.991119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:16.991143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:17.000827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:37:15.826 [2024-12-13 03:48:17.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:17.002137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:17.009377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:15.826 [2024-12-13 03:48:17.010033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:17.010057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:17.020053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:15.826 [2024-12-13 03:48:17.020546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:17.020570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:15.826 [2024-12-13 03:48:17.031038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:15.826 [2024-12-13 03:48:17.031689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.826 [2024-12-13 03:48:17.031714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:16.086 [2024-12-13 03:48:17.042267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:37:16.086 [2024-12-13 03:48:17.043089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.086 [2024-12-13 03:48:17.043114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:16.086 [2024-12-13 03:48:17.052830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:16.086 [2024-12-13 03:48:17.053893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.086 [2024-12-13 03:48:17.053922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:16.086 [2024-12-13 03:48:17.062498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea248 00:37:16.086 [2024-12-13 03:48:17.063926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.086 [2024-12-13 03:48:17.063950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.086 [2024-12-13 03:48:17.071454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:16.087 [2024-12-13 03:48:17.072187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.072211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.082354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:37:16.087 [2024-12-13 03:48:17.083231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.083256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:16.087 24233.00 IOPS, 94.66 MiB/s [2024-12-13T02:48:17.296Z] [2024-12-13 03:48:17.093772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:16.087 [2024-12-13 03:48:17.094569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.094595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.104179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:16.087 [2024-12-13 03:48:17.104980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.105006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.116266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:16.087 [2024-12-13 03:48:17.117707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.117731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.127366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:16.087 [2024-12-13 03:48:17.128950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.128974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.138300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:16.087 [2024-12-13 03:48:17.140011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.140035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.145671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:16.087 [2024-12-13 03:48:17.146413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.146438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.155557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:16.087 [2024-12-13 03:48:17.156253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.156279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.166391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:16.087 [2024-12-13 03:48:17.167274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.167299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 
03:48:17.177352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:16.087 [2024-12-13 03:48:17.178364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.178389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.188434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:16.087 [2024-12-13 03:48:17.189549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.189573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.199289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:16.087 [2024-12-13 03:48:17.200607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.200632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.209951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:16.087 [2024-12-13 03:48:17.210717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.210741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.219680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:16.087 [2024-12-13 03:48:17.221155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.221179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.228861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.087 [2024-12-13 03:48:17.229595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.229620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.239836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:16.087 [2024-12-13 03:48:17.240715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.240739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 
cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.250696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:16.087 [2024-12-13 03:48:17.251710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.251735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.261742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:16.087 [2024-12-13 03:48:17.262886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.262911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.272619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:16.087 [2024-12-13 03:48:17.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.273951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.283560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:16.087 [2024-12-13 03:48:17.285015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.087 [2024-12-13 03:48:17.285040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:16.087 [2024-12-13 03:48:17.293158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:16.347 [2024-12-13 03:48:17.294718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.294745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.302502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:16.347 [2024-12-13 03:48:17.303229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.303254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.313402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:37:16.347 [2024-12-13 03:48:17.314260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.314286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.324270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:16.347 [2024-12-13 03:48:17.325324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.325348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.335156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:16.347 [2024-12-13 03:48:17.336304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.336328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.346062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:16.347 [2024-12-13 03:48:17.347337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.347362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.356958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd208 00:37:16.347 [2024-12-13 03:48:17.358419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.347 [2024-12-13 03:48:17.358443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:16.347 [2024-12-13 03:48:17.366901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:37:16.348 [2024-12-13 03:48:17.367850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.367875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.377331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:37:16.348 [2024-12-13 03:48:17.378415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.378439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.388202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:16.348 [2024-12-13 03:48:17.389010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 
03:48:17.389034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.398029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:16.348 [2024-12-13 03:48:17.399399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.399426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.407012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:16.348 [2024-12-13 03:48:17.407720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.407745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.417923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:16.348 [2024-12-13 03:48:17.418806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.418831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.428858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:16.348 [2024-12-13 03:48:17.429848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.429874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.439785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.440981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.441008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.450422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:16.348 [2024-12-13 03:48:17.451108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.451134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.460676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:16.348 [2024-12-13 03:48:17.461819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24065 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.461844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.470762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.471468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.471493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.481366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.482167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.482202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.491792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.492472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.492497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.502200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.503031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.503055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.512657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.513330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.513354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.524467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.348 [2024-12-13 03:48:17.525682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.525706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.535368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:16.348 [2024-12-13 03:48:17.536832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.536858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:16.348 [2024-12-13 03:48:17.546354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:16.348 [2024-12-13 03:48:17.547840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.348 [2024-12-13 03:48:17.547865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.557580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:16.608 [2024-12-13 03:48:17.559361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.559387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.565072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:16.608 [2024-12-13 03:48:17.565801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.565826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.575939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:16.608 [2024-12-13 03:48:17.576444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.576473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.589504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:16.608 [2024-12-13 03:48:17.591207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.591232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.598087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:16.608 [2024-12-13 03:48:17.599253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.599278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.609146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173fa3a0 00:37:16.608 [2024-12-13 03:48:17.610455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.610481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.620304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:16.608 [2024-12-13 03:48:17.621759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.621783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.631494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:16.608 [2024-12-13 03:48:17.633051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.633075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.642598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:16.608 [2024-12-13 03:48:17.644270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.644294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.649978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:16.608 [2024-12-13 03:48:17.650635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.650659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.660418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:16.608 [2024-12-13 03:48:17.661342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.661367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.671369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:16.608 [2024-12-13 03:48:17.672407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.672432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.682343] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:16.608 [2024-12-13 03:48:17.683445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.683470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.693555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.608 [2024-12-13 03:48:17.694652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.694677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.703841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.608 [2024-12-13 03:48:17.704941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.704965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.714296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.608 [2024-12-13 03:48:17.715353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.715378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.724664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.608 [2024-12-13 03:48:17.725819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.725843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.735076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.608 [2024-12-13 03:48:17.736160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.608 [2024-12-13 03:48:17.736195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.608 [2024-12-13 03:48:17.745501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.746561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.746586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a 
p:0 m:0 dnr:0 00:37:16.609 [2024-12-13 03:48:17.755938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.757119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.757143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.609 [2024-12-13 03:48:17.766370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.767512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.767537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.609 [2024-12-13 03:48:17.776824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.777880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.777904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.609 [2024-12-13 03:48:17.787408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.788572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.788596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.609 [2024-12-13 03:48:17.797786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.798928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.798953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.609 [2024-12-13 03:48:17.808228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.609 [2024-12-13 03:48:17.809325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.609 [2024-12-13 03:48:17.809350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.868 [2024-12-13 03:48:17.819057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.868 [2024-12-13 03:48:17.820306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.868 [2024-12-13 03:48:17.820331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.868 [2024-12-13 03:48:17.829532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.868 [2024-12-13 03:48:17.830676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.868 [2024-12-13 03:48:17.830701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.868 [2024-12-13 03:48:17.839958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.868 [2024-12-13 03:48:17.841153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.868 [2024-12-13 03:48:17.841187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.868 [2024-12-13 03:48:17.850393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.868 [2024-12-13 03:48:17.851555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.868 [2024-12-13 03:48:17.851583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.868 [2024-12-13 03:48:17.860892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:16.868 [2024-12-13 03:48:17.862112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.868 [2024-12-13 03:48:17.862137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.870848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:16.869 [2024-12-13 03:48:17.871988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.872013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.881943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:16.869 [2024-12-13 03:48:17.883219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.883244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.891773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:16.869 [2024-12-13 03:48:17.892653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.892678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.902049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:16.869 [2024-12-13 03:48:17.902892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.902921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.912440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:16.869 [2024-12-13 03:48:17.913337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.913361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.923153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:16.869 [2024-12-13 03:48:17.923885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.923909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.934084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fcdd0 00:37:16.869 [2024-12-13 03:48:17.935189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.935214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.944659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:16.869 [2024-12-13 03:48:17.945831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.945855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.955061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:37:16.869 [2024-12-13 03:48:17.956192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.956219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.965470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:16.869 [2024-12-13 03:48:17.966584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9436 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.966608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.975855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:16.869 [2024-12-13 03:48:17.977023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.977048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.986397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:16.869 [2024-12-13 03:48:17.987525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.987550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:17.996795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f81e0 00:37:16.869 [2024-12-13 03:48:17.997937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:17.997962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.007207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:16.869 [2024-12-13 03:48:18.008334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.008358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.017591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:16.869 [2024-12-13 03:48:18.018700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.018724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.028004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6738 00:37:16.869 [2024-12-13 03:48:18.029182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.029209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.039750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:37:16.869 [2024-12-13 03:48:18.041410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.041434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.047153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:16.869 [2024-12-13 03:48:18.047853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.047878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.057766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:16.869 [2024-12-13 03:48:18.058515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.058539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:16.869 [2024-12-13 03:48:18.068029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1868 00:37:16.869 [2024-12-13 03:48:18.068756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:16.869 [2024-12-13 03:48:18.068781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:17.128 [2024-12-13 03:48:18.078082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:17.128 [2024-12-13 03:48:18.078807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:17.128 [2024-12-13 03:48:18.078831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:17.128 [2024-12-13 03:48:18.089201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:37:17.128 24293.00 IOPS, 94.89 MiB/s [2024-12-13T02:48:18.337Z] [2024-12-13 03:48:18.090043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:17.128 [2024-12-13 03:48:18.090066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:17.128 00:37:17.128 Latency(us) 00:37:17.128 [2024-12-13T02:48:18.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.128 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:17.128 nvme0n1 : 2.00 24312.80 94.97 0.00 0.00 5259.97 2090.91 14605.17 00:37:17.128 [2024-12-13T02:48:18.337Z] =================================================================================================================== 00:37:17.128 [2024-12-13T02:48:18.337Z] Total : 24312.80 94.97 0.00 0.00 5259.97 2090.91 14605.17 00:37:17.128 { 00:37:17.128 "results": [ 00:37:17.128 { 00:37:17.128 "job": "nvme0n1", 
00:37:17.128 "core_mask": "0x2", 00:37:17.128 "workload": "randwrite", 00:37:17.128 "status": "finished", 00:37:17.128 "queue_depth": 128, 00:37:17.128 "io_size": 4096, 00:37:17.128 "runtime": 2.003636, 00:37:17.128 "iops": 24312.799330816575, 00:37:17.128 "mibps": 94.97187238600225, 00:37:17.128 "io_failed": 0, 00:37:17.128 "io_timeout": 0, 00:37:17.128 "avg_latency_us": 5259.9723118610655, 00:37:17.128 "min_latency_us": 2090.9104761904764, 00:37:17.128 "max_latency_us": 14605.165714285715 00:37:17.128 } 00:37:17.128 ], 00:37:17.128 "core_count": 1 00:37:17.128 } 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:17.128 | .driver_specific 00:37:17.128 | .nvme_error 00:37:17.128 | .status_code 00:37:17.128 | .command_transient_transport_error' 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 )) 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2906020 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2906020 ']' 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2906020 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.128 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906020 00:37:17.387 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:17.387 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:17.387 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906020' 00:37:17.387 killing process with pid 2906020 00:37:17.387 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2906020 00:37:17.387 Received shutdown signal, test time was about 2.000000 seconds 00:37:17.387 00:37:17.387 Latency(us) 00:37:17.387 [2024-12-13T02:48:18.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.387 [2024-12-13T02:48:18.596Z] =================================================================================================================== 00:37:17.387 [2024-12-13T02:48:18.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:17.387 03:48:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2906020 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2906908 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2906908 /var/tmp/bperf.sock 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2906908 ']' 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:18.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.324 03:48:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.324 [2024-12-13 03:48:19.303114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:18.324 [2024-12-13 03:48:19.303202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906908 ] 00:37:18.324 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:18.324 Zero copy mechanism will not be used. 
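
[editorial aside] The xtrace lines just above show how the harness decides that the first digest-error pass succeeded: get_transient_errcount reads per-bdev I/O statistics over the bdevperf RPC socket and extracts the COMMAND TRANSIENT TRANSPORT ERROR counter from the NVMe error statistics, and the test requires it to be non-zero (191 errors were counted here). A minimal sketch of that check, assuming the same socket path, jq filter, and bdev name that appear in the trace (rpc.py path shortened from the full workspace path):

    # Count COMMAND TRANSIENT TRANSPORT ERRORs for a bdev via the bperf RPC socket,
    # mirroring the get_transient_errcount call visible in the xtrace above.
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # pass only if at least one injected digest error was observed
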
00:37:18.324 [2024-12-13 03:48:19.413055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.324 [2024-12-13 03:48:19.524152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:19.260 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:19.518 nvme0n1 00:37:19.518 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:19.518 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.518 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.519 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.519 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:19.519 03:48:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.778 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:19.778 Zero copy mechanism will not be used. 00:37:19.778 Running I/O for 2 seconds... 
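
[editorial aside] Before the second pass (run_bperf_err randwrite 131072 16), the trace above records the setup: bdevperf is launched on core mask 0x2 with 128 KiB random writes at queue depth 16 for 2 seconds, NVMe error statistics and unlimited retries are enabled, the controller is attached over TCP with the data digest turned on (--ddgst), and crc32c error injection is armed with interval 32. A condensed sketch of that RPC sequence, using only the commands and addresses that appear in the trace (paths shortened; this is a reading of the xtrace, not the canonical digest.sh source):

    RPC=./scripts/rpc.py              # shortened; the log uses the full workspace path
    BPERF="-s /var/tmp/bperf.sock"    # bdevperf-side RPC socket from the log

    $RPC $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable        # issued via rpc_cmd in the trace (default socket)
    $RPC $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst enables the TCP data digest
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32  # inject crc32c corruption, interval 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests   # drive the 2-second run

The digest errors and transient transport completions that follow are the expected effect of that injection; the same transient-error count check shown earlier is then applied to this run.
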
00:37:19.778 [2024-12-13 03:48:20.810448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.810546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.810584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.816639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.816737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.816768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.823246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.823387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.823416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.830695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.830825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.830852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.837621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.837715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.837742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.843754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.843869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.843895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.850689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.850760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.850786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.858271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.858435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.778 [2024-12-13 03:48:20.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.778 [2024-12-13 03:48:20.866382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.778 [2024-12-13 03:48:20.866564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.866590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.874235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.874391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.874421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.881505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.881593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.881619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.888783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.888906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.888937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.895976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.896046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.896072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.901509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.901588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 
03:48:20.901614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.907054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.907135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.907168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.912409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.912489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.912514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.917790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.917861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.917886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.922979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.923061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.923086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.928256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.928327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.928353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.933497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.933566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.933590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.938853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.938936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.938962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.944101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.944166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.944191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.949305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.949374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.949399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.954513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.954584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.954609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.959729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.959806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.959831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.965049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.965136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.965172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.970360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.970426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.970455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.975655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.975723] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.975748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:19.779 [2024-12-13 03:48:20.980974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:19.779 [2024-12-13 03:48:20.981052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.779 [2024-12-13 03:48:20.981078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:20.986618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:20.986687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:20.986713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:20.991999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:20.992078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:20.992103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:20.997361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:20.997431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:20.997456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.002713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.002792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.002818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.008095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.008160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.008185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.013276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.013361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.013385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.018679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.018754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.018779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.024081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.024160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.024185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.029437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.029506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.029531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.034771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.034849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.034873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.039997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.040064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.040089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.045161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.045241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.045266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 
03:48:21.050435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.050502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.050527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.055628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.055692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.039 [2024-12-13 03:48:21.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.039 [2024-12-13 03:48:21.060874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.039 [2024-12-13 03:48:21.060949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.060974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.066236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.066304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.066330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.071528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.071637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.071662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.076763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.076844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.076869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.082041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.082110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.082135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.087465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.087591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.087617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.092747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.092814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.092839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.097962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.098029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.098053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.103423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.103505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.103530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.108649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.108727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.108752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.113845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.113911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.113942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.119044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.119122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.119147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.124278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.124356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.124381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.129571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.129635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.129660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.134714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.134781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.134805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.139955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.140034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.140059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.145269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.145348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.145372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.150456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.150529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.150554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.155683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.155764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.155789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.160900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.160976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.161000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.165934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.166014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.166038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.171006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.171086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.171110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.176315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.176387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.176412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.181416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.181481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.181505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.186609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.186692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.186717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.191865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.191938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.191963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.197142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.197219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.197248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.202479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.040 [2024-12-13 03:48:21.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.040 [2024-12-13 03:48:21.202573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.040 [2024-12-13 03:48:21.207800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.207867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.207892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.213051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.213135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.213160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.218292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.218357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.218381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.223464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.223533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.223557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.228862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.228940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.228981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.234100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.234188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.234213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.239506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.239580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.239606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.041 [2024-12-13 03:48:21.244951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.041 [2024-12-13 03:48:21.245021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.041 [2024-12-13 03:48:21.245053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.251172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.251243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.251268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.257693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.257773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.257798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.263408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.263474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.263499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.269009] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.269099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.269124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.274301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.274370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.274394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.279575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.279641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.279666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.285160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.285262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.285287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.290520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.290612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.290641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.296840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.296907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.296938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.302611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.302679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.302703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.308259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.308336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.308361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.314973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.315042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.315066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.321416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.321514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.321540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.326958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.327093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.327118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.332404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.332484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.332509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.337889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.337966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.337992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.343297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.343364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.301 [2024-12-13 03:48:21.343389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.301 [2024-12-13 03:48:21.348656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.301 [2024-12-13 03:48:21.348725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.348749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.354021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.354116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.354141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.359314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.359386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.359410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.364666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.364732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.364756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.370060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.370139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.370164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.375370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.375437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.375461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.380534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.380604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.380629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.385693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.385773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.385802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.390876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.390958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.390984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.396036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.396100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.396124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.401170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.401238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.401263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.406346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.406424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.406449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.411565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.411667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.411691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.416864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.416952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.416976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.422695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.422785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.422809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.428502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.428576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.428601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.433879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.433959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.433983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.439164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.439232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.439256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.444455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.444553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.444577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.449861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.449933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.449958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.455052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.455133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.455170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.460318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.460403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.460428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.465789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.465936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.465961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.471668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.471747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.471772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.477682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.477763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.477793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.483914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.483998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.484022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.490153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.490220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.490244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.495829] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.302 [2024-12-13 03:48:21.495896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.302 [2024-12-13 03:48:21.495926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.302 [2024-12-13 03:48:21.501139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.303 [2024-12-13 03:48:21.501217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.303 [2024-12-13 03:48:21.501243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.303 [2024-12-13 03:48:21.506534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.303 [2024-12-13 03:48:21.506605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.303 [2024-12-13 03:48:21.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.511889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.511964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.512004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.517214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.517282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.517307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.522365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.522430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.522455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.527591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.527674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.527698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.532805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.532872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.532896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.537947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.538015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.538040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.543121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.543199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.543224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.548346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.548426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.548450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.553483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.553549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.553574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.558606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.558669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.558693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.563880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.564021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.564046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.569888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.570059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.570088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.576413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.576545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.576570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.582238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.582376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.582401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.588584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.588695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.588720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.594878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.595009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.595034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.600268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.600379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.600413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.605993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.606096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.606121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.611702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.611776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.611802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.617546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.617652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.617677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.623284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.623388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.623413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.628926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.629039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.629063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.634129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.634196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.634221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.563 [2024-12-13 03:48:21.639224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.563 [2024-12-13 03:48:21.639302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.563 [2024-12-13 03:48:21.639326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.644335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.644408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.644433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.649984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.650082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.650107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.656591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.656789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.656815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.663043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.663158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.663183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.669117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.669232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.669257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.674998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.675134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.675163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.680791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.680893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.680926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.686444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.686516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.686558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.692434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.692604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.692629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.698897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.699050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.699075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.705654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.705803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.705827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.712174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.712304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.712329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.718822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.718980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.719004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.725691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.725861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.725885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.732128] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.732271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.732296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.738625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.738763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.738788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.745454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.745523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.745549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.751393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.751481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.751505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.757069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.757168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.757192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.762625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.762693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.762718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.564 [2024-12-13 03:48:21.768037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.564 [2024-12-13 03:48:21.768128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.564 [2024-12-13 03:48:21.768152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.824 [2024-12-13 03:48:21.774233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.824 [2024-12-13 03:48:21.774407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.824 [2024-12-13 03:48:21.774432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.824 [2024-12-13 03:48:21.780807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.824 [2024-12-13 03:48:21.780978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.824 [2024-12-13 03:48:21.781003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.824 [2024-12-13 03:48:21.787518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.824 [2024-12-13 03:48:21.787701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.824 [2024-12-13 03:48:21.787725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.824 [2024-12-13 03:48:21.793986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.794115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.794140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.800712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.800890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.800916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.807711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.809058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.809085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.825 5469.00 IOPS, 683.62 MiB/s [2024-12-13T02:48:22.034Z] [2024-12-13 03:48:21.814988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.815127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 
[2024-12-13 03:48:21.815152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.820793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.820908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.820940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.826206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.826315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.826342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.832217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.832342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.832368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.837565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.837648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.837673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.842864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.842992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.843017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.848718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.848798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.848823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.855024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.855162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.855186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.862018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.862130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.862156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.868828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.868945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.868971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.875788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.875900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.875933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.882624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.882725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.882749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.890314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.890449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.890474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.897055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.897191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.897216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.903255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.903384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.903408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.909120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.909227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.909251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.915057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.915182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.915207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.921313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.921455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.921479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.927947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.928118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.928143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.935082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.935191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.935216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.942053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.942157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.942187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.948856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.948988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.949013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.955383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.825 [2024-12-13 03:48:21.955459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.825 [2024-12-13 03:48:21.955484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.825 [2024-12-13 03:48:21.961343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.961424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.961449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:21.967388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.967467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.967492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:21.973142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.973231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.973256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:21.979101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.979181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.979206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:21.985353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.985418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.985444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 
03:48:21.991642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.991723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.991749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:21.997797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:21.997865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:21.997899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:22.004109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:22.004200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:22.004225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:22.010092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:22.010159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:22.010185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:22.016055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:22.016134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:22.016160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:22.022677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:22.022749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:22.022774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:20.826 [2024-12-13 03:48:22.028703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:20.826 [2024-12-13 03:48:22.028781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.826 [2024-12-13 03:48:22.028807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.034763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.034868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.034893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.040864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.040936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.040961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.046834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.046902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.046938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.052559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.052637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.052664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.057878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.057960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.057986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.064121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.064192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.064218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.070209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.070359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.070384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.076375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.076447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.086 [2024-12-13 03:48:22.076473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.086 [2024-12-13 03:48:22.082538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.086 [2024-12-13 03:48:22.082621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.082647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.088494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.088568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.088594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.094038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.094137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.094164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.100043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.100189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.100214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.105764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.105856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.105880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.111266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.111333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11552 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.111358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.116342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.116472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.116497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.121387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.121451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.121478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.126382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.126476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.126501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.131472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.131539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.131564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.136599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.136677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.136701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.141625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.141705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.141734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.146718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.146810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.146834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.151975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.152066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.152090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.157375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.157488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.157512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.162778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.162841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.162865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.168442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.168511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.168536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.173789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.173857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.173882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.179323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.179414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.179439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.184879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.184989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.185014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.190873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.190954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.190995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.196258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.196330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.196355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.201845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.201911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.201941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.207359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.207439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.207464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.213931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.214001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.214025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.219539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.219618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.219644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.225726] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.225830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.225855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.087 [2024-12-13 03:48:22.232902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.087 [2024-12-13 03:48:22.233028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.087 [2024-12-13 03:48:22.233054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.239791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.239914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.239955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.245862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.245935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.245960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.251886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.252004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.252029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.257466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.257533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.257558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.262460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.262539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.262564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.267505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.267578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.267603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.272898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.272994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.273020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.278234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.278299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.278323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.283893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.283987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.284012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.088 [2024-12-13 03:48:22.289163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.088 [2024-12-13 03:48:22.289240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.088 [2024-12-13 03:48:22.289265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.294118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.294194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.294218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.299547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.299620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.299645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.305021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.305101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.305126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.309790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.309873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.309899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.314489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.314559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.314584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.319140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.319209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.319233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.323824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.323909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.323950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.328519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.328602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.328627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.333294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.333366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.333391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.348 [2024-12-13 03:48:22.338202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.348 [2024-12-13 03:48:22.338273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.348 [2024-12-13 03:48:22.338299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.343346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.343429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.343454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.348664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.348749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.348774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.353971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.354040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.354072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.359639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.359733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.359758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.364996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.365066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.365090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.370326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.370405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.370430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.375714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.375803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.375828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.380900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.381027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.381052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.386803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.386928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.386953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.392307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.392416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.392440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.397767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.397846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.397871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.402927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.403009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.403034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.408130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.408238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.408262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.413221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.413372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.413396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.418834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.418924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.418950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.424494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.424566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.424591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.429902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.429991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.430016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.435068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.435149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.435186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.439857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.439938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.439962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.444599] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.444670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.444695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.449298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.449382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.449406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.454028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.454107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.454132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.458824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.458924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.458949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.463614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.463715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.463742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.468343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.468420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.468445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.473182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.473248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.473273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.478017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.349 [2024-12-13 03:48:22.478105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.349 [2024-12-13 03:48:22.478131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.349 [2024-12-13 03:48:22.482886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.482972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.482998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.487667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.487756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.487781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.492438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.492511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.492536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.497249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.497315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.497340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.501996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.502077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.502102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.506798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.506889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.506915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.511574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.511641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.511666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.516246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.516360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.516385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.521132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.521232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.521257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.526259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.526327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.526351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.531988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.532179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.532204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.538157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.538283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.538307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.543727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.543822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.543846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.350 [2024-12-13 03:48:22.549737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.350 [2024-12-13 03:48:22.549816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.350 [2024-12-13 03:48:22.549846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.610 [2024-12-13 03:48:22.555501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.610 [2024-12-13 03:48:22.555573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.555597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.561042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.561112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.561137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.566463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.566537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.566562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.571803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.571880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.571905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.577119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.577212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.577237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.583114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.583182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.583208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.588474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.588542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.588568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.593728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.593801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.593826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.598590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.598660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.598685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.603418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.603494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.603520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.608295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.608430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.608455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.613098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.613180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.613206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.617964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.618039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.618063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.622730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.622861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.622891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.627491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.627578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.627603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.632308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.632383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.632408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.637127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.637197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.637225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.641823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.641900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.641931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.646588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.646659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.646683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.651389] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.651522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.651547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.656120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.656191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.656215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.660785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.660882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.660908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.665628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.665694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.665719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.670389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.670467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.670492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.675126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.675206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.675231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.611 [2024-12-13 03:48:22.679900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.611 [2024-12-13 03:48:22.679995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.611 [2024-12-13 03:48:22.680047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.684695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.684764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.684789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.689584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.689654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.689680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.694313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.694399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.694424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.699005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.699091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.699117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.703776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.703861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.703886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.708558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.708626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.708651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.713280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.713346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.713371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.718036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.718118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.718146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.722806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.722887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.722912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.727656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.727727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.727751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.732378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.732463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.732487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.737119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.737182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.737206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.741875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.742015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.742039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.746678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.746760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.746785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.751419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.751489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.751513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.756130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.756197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.756222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.760938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.761017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.761042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.765691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.765760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.765785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.770494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.770629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.770653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.775299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.775370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.775394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.780034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.780149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.780173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.784770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.784839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.784864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.789747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.789883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.789907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.795400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.795545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.795570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.801366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.801509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.801538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:21.612 [2024-12-13 03:48:22.808328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173fef90 00:37:21.612 [2024-12-13 03:48:22.808528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.612 [2024-12-13 03:48:22.808554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:21.612 5590.00 IOPS, 698.75 MiB/s 00:37:21.612 Latency(us) 00:37:21.612 [2024-12-13T02:48:22.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.612 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:21.612 nvme0n1 : 2.00 5588.16 698.52 0.00 0.00 2857.86 2215.74 8301.23 00:37:21.612 [2024-12-13T02:48:22.821Z] =================================================================================================================== 00:37:21.612 [2024-12-13T02:48:22.821Z] Total : 5588.16 698.52 0.00 0.00 2857.86 2215.74 8301.23 00:37:21.871 { 00:37:21.871 "results": [ 00:37:21.871 { 
00:37:21.871 "job": "nvme0n1", 00:37:21.871 "core_mask": "0x2", 00:37:21.871 "workload": "randwrite", 00:37:21.871 "status": "finished", 00:37:21.871 "queue_depth": 16, 00:37:21.872 "io_size": 131072, 00:37:21.872 "runtime": 2.004057, 00:37:21.872 "iops": 5588.164408497363, 00:37:21.872 "mibps": 698.5205510621704, 00:37:21.872 "io_failed": 0, 00:37:21.872 "io_timeout": 0, 00:37:21.872 "avg_latency_us": 2857.8630030742543, 00:37:21.872 "min_latency_us": 2215.7409523809524, 00:37:21.872 "max_latency_us": 8301.226666666667 00:37:21.872 } 00:37:21.872 ], 00:37:21.872 "core_count": 1 00:37:21.872 } 00:37:21.872 03:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:21.872 03:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:21.872 03:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:21.872 | .driver_specific 00:37:21.872 | .nvme_error 00:37:21.872 | .status_code 00:37:21.872 | .command_transient_transport_error' 00:37:21.872 03:48:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 )) 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2906908 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2906908 ']' 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2906908 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.872 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906908 00:37:22.131 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:22.131 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:22.131 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906908' 00:37:22.131 killing process with pid 2906908 00:37:22.131 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2906908 00:37:22.131 Received shutdown signal, test time was about 2.000000 seconds 00:37:22.131 00:37:22.131 Latency(us) 00:37:22.131 [2024-12-13T02:48:23.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.131 [2024-12-13T02:48:23.340Z] =================================================================================================================== 00:37:22.131 [2024-12-13T02:48:23.340Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:22.131 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2906908 00:37:23.068 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2904016 00:37:23.068 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@954 -- # '[' -z 2904016 ']' 00:37:23.068 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2904016 00:37:23.068 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:23.068 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:23.068 03:48:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904016 00:37:23.068 03:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:23.068 03:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:23.068 03:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904016' 00:37:23.068 killing process with pid 2904016 00:37:23.068 03:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2904016 00:37:23.068 03:48:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2904016 00:37:24.005 00:37:24.005 real 0m21.447s 00:37:24.005 user 0m40.271s 00:37:24.005 sys 0m4.899s 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.005 ************************************ 00:37:24.005 END TEST nvmf_digest_error 00:37:24.005 ************************************ 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:24.005 rmmod nvme_tcp 00:37:24.005 rmmod nvme_fabrics 00:37:24.005 rmmod nvme_keyring 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:24.005 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2904016 ']' 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2904016 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2904016 ']' 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2904016 00:37:24.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2904016) - No such process 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with 
pid 2904016 is not found' 00:37:24.264 Process with pid 2904016 is not found 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.264 03:48:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:26.168 00:37:26.168 real 0m51.711s 00:37:26.168 user 1m23.822s 00:37:26.168 sys 0m13.986s 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:26.168 ************************************ 00:37:26.168 END TEST nvmf_digest 00:37:26.168 ************************************ 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:26.168 ************************************ 00:37:26.168 START TEST nvmf_bdevperf 00:37:26.168 ************************************ 00:37:26.168 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:26.428 * Looking for test storage... 
00:37:26.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.428 --rc genhtml_branch_coverage=1 00:37:26.428 --rc genhtml_function_coverage=1 00:37:26.428 --rc genhtml_legend=1 00:37:26.428 --rc geninfo_all_blocks=1 00:37:26.428 --rc geninfo_unexecuted_blocks=1 00:37:26.428 00:37:26.428 ' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.428 --rc genhtml_branch_coverage=1 00:37:26.428 --rc genhtml_function_coverage=1 00:37:26.428 --rc genhtml_legend=1 00:37:26.428 --rc geninfo_all_blocks=1 00:37:26.428 --rc geninfo_unexecuted_blocks=1 00:37:26.428 00:37:26.428 ' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.428 --rc genhtml_branch_coverage=1 00:37:26.428 --rc genhtml_function_coverage=1 00:37:26.428 --rc genhtml_legend=1 00:37:26.428 --rc geninfo_all_blocks=1 00:37:26.428 --rc geninfo_unexecuted_blocks=1 00:37:26.428 00:37:26.428 ' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:26.428 --rc genhtml_branch_coverage=1 00:37:26.428 --rc genhtml_function_coverage=1 00:37:26.428 --rc genhtml_legend=1 00:37:26.428 --rc geninfo_all_blocks=1 00:37:26.428 --rc geninfo_unexecuted_blocks=1 00:37:26.428 00:37:26.428 ' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.428 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:26.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:26.429 03:48:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:31.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.703 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:31.704 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:31.704 Found net devices under 0000:af:00.0: cvl_0_0 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:31.704 Found net devices under 0000:af:00.1: cvl_0_1 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.704 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.963 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.963 03:48:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.963 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.963 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.963 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:31.963 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:31.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:37:31.963 00:37:31.963 --- 10.0.0.2 ping statistics --- 00:37:31.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.963 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:37:31.963 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:31.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:37:31.963 00:37:31.963 --- 10.0.0.1 ping statistics --- 00:37:31.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.963 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2911093 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2911093 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2911093 ']' 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:31.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.964 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:31.964 [2024-12-13 03:48:33.154651] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
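Note: the nvmftestinit trace above moves the first e810 port (cvl_0_0) into a dedicated network namespace, addresses both sides, and verifies connectivity in each direction before the target is started. A condensed sketch of the equivalent commands, assuming the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing used in this run (the actual script also tags the iptables rule with an SPDK_NVMF comment):

# target-side port lives in its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1, target namespace gets 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1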
00:37:31.964 [2024-12-13 03:48:33.154740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:32.223 [2024-12-13 03:48:33.273212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:32.223 [2024-12-13 03:48:33.382970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:32.223 [2024-12-13 03:48:33.383013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:32.223 [2024-12-13 03:48:33.383025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:32.223 [2024-12-13 03:48:33.383036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:32.223 [2024-12-13 03:48:33.383045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:32.223 [2024-12-13 03:48:33.385008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:32.223 [2024-12-13 03:48:33.385071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.223 [2024-12-13 03:48:33.385080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.791 03:48:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.050 [2024-12-13 03:48:34.001050] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.050 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.050 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:33.050 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.050 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.050 Malloc0 00:37:33.050 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:33.051 [2024-12-13 03:48:34.126717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:33.051 { 00:37:33.051 "params": { 00:37:33.051 "name": "Nvme$subsystem", 00:37:33.051 "trtype": "$TEST_TRANSPORT", 00:37:33.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:33.051 "adrfam": "ipv4", 00:37:33.051 "trsvcid": "$NVMF_PORT", 00:37:33.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:33.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:33.051 "hdgst": ${hdgst:-false}, 00:37:33.051 "ddgst": ${ddgst:-false} 00:37:33.051 }, 00:37:33.051 "method": "bdev_nvme_attach_controller" 00:37:33.051 } 00:37:33.051 EOF 00:37:33.051 )") 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:33.051 03:48:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:33.051 "params": { 00:37:33.051 "name": "Nvme1", 00:37:33.051 "trtype": "tcp", 00:37:33.051 "traddr": "10.0.0.2", 00:37:33.051 "adrfam": "ipv4", 00:37:33.051 "trsvcid": "4420", 00:37:33.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:33.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:33.051 "hdgst": false, 00:37:33.051 "ddgst": false 00:37:33.051 }, 00:37:33.051 "method": "bdev_nvme_attach_controller" 00:37:33.051 }' 00:37:33.051 [2024-12-13 03:48:34.206455] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
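Note: for readability, the target-side setup that the rpc_cmd traces above perform is equivalent to the following standalone calls (a sketch; in these tests rpc_cmd wraps scripts/rpc.py against the target's default RPC socket):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that listener using the bdev_nvme_attach_controller parameters printed in the generated JSON (Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) and drives the verify workload at queue depth 128 with 4096-byte I/O.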
00:37:33.051 [2024-12-13 03:48:34.206542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911321 ] 00:37:33.310 [2024-12-13 03:48:34.320255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.310 [2024-12-13 03:48:34.438179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.877 Running I/O for 1 seconds... 00:37:34.811 9708.00 IOPS, 37.92 MiB/s 00:37:34.811 Latency(us) 00:37:34.811 [2024-12-13T02:48:36.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.811 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:34.811 Verification LBA range: start 0x0 length 0x4000 00:37:34.811 Nvme1n1 : 1.00 9790.73 38.25 0.00 0.00 13022.51 1154.68 10298.51 00:37:34.811 [2024-12-13T02:48:36.020Z] =================================================================================================================== 00:37:34.811 [2024-12-13T02:48:36.020Z] Total : 9790.73 38.25 0.00 0.00 13022.51 1154.68 10298.51 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2911769 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:35.748 { 00:37:35.748 "params": { 00:37:35.748 "name": "Nvme$subsystem", 00:37:35.748 "trtype": "$TEST_TRANSPORT", 00:37:35.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:35.748 "adrfam": "ipv4", 00:37:35.748 "trsvcid": "$NVMF_PORT", 00:37:35.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:35.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:35.748 "hdgst": ${hdgst:-false}, 00:37:35.748 "ddgst": ${ddgst:-false} 00:37:35.748 }, 00:37:35.748 "method": "bdev_nvme_attach_controller" 00:37:35.748 } 00:37:35.748 EOF 00:37:35.748 )") 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:35.748 03:48:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:35.748 "params": { 00:37:35.748 "name": "Nvme1", 00:37:35.748 "trtype": "tcp", 00:37:35.748 "traddr": "10.0.0.2", 00:37:35.748 "adrfam": "ipv4", 00:37:35.748 "trsvcid": "4420", 00:37:35.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:35.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:35.748 "hdgst": false, 00:37:35.748 "ddgst": false 00:37:35.748 }, 00:37:35.748 "method": "bdev_nvme_attach_controller" 00:37:35.748 }' 00:37:36.007 [2024-12-13 03:48:36.973444] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:36.007 [2024-12-13 03:48:36.973532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911769 ] 00:37:36.007 [2024-12-13 03:48:37.088238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.007 [2024-12-13 03:48:37.202119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.575 Running I/O for 15 seconds... 00:37:38.445 9649.00 IOPS, 37.69 MiB/s [2024-12-13T02:48:39.913Z] 9728.50 IOPS, 38.00 MiB/s [2024-12-13T02:48:39.913Z] 03:48:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2911093 00:37:38.704 03:48:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:38.965 [2024-12-13 03:48:39.926208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 
03:48:39.926397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.965 [2024-12-13 03:48:39.926621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.965 [2024-12-13 03:48:39.926909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.965 [2024-12-13 03:48:39.926927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.926937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.926948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.926957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.926969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.926978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.926990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.926999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.966 [2024-12-13 03:48:39.927149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 
03:48:39.927286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.966 [2024-12-13 03:48:39.927770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.966 [2024-12-13 03:48:39.927781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.927987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.927997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:38.967 [2024-12-13 03:48:39.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928340] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.967 [2024-12-13 03:48:39.928487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928548] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.967 [2024-12-13 03:48:39.928621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.967 [2024-12-13 03:48:39.928631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.928956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:38.968 [2024-12-13 03:48:39.928967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:38.968 [2024-12-13 03:48:39.928978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.928991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:38.968 [2024-12-13 03:48:39.929007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:38.968 [2024-12-13 03:48:39.929017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46008 len:8 PRP1 0x0 PRP2 0x0 00:37:38.968 [2024-12-13 03:48:39.929028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:38.968 [2024-12-13 03:48:39.932485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.968 [2024-12-13 03:48:39.932568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.968 [2024-12-13 03:48:39.933255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-12-13 03:48:39.933282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.968 [2024-12-13 03:48:39.933294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.933496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.968 [2024-12-13 03:48:39.933695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.968 [2024-12-13 03:48:39.933707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.968 [2024-12-13 03:48:39.933720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.968 [2024-12-13 03:48:39.933732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.968 [2024-12-13 03:48:39.946150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.968 [2024-12-13 03:48:39.946545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-12-13 03:48:39.946567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.968 [2024-12-13 03:48:39.946578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.946775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.968 [2024-12-13 03:48:39.946978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.968 [2024-12-13 03:48:39.946991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.968 [2024-12-13 03:48:39.947000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:37:38.968 [2024-12-13 03:48:39.947009] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.968 [2024-12-13 03:48:39.959442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.968 [2024-12-13 03:48:39.959935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-12-13 03:48:39.959997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.968 [2024-12-13 03:48:39.960031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.960552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.968 [2024-12-13 03:48:39.960742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.968 [2024-12-13 03:48:39.960753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.968 [2024-12-13 03:48:39.960763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.968 [2024-12-13 03:48:39.960772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.968 [2024-12-13 03:48:39.972475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.968 [2024-12-13 03:48:39.972928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-12-13 03:48:39.972950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.968 [2024-12-13 03:48:39.972960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.973151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.968 [2024-12-13 03:48:39.973341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.968 [2024-12-13 03:48:39.973352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.968 [2024-12-13 03:48:39.973361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.968 [2024-12-13 03:48:39.973370] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
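The wall of nvme_qpair *NOTICE* lines above is the host side of the failover exercised a few lines earlier by the bdevperf.sh "kill -9" followed by "sleep 3": with nothing left listening on 10.0.0.2:4420, every READ and WRITE that bdevperf still had queued on qpair 1 is completed manually with ABORTED - SQ DELETION, and the "(00/08)" in each completion is that status spelled as status-code-type 0h / status-code 08h (Command Aborted due to SQ Deletion). The reset cycles that follow all fail the same way: the reconnect to 10.0.0.2:4420 gets connect() errno 111 (ECONNREFUSED), the stale socket flush reports "(9): Bad file descriptor", and bdev_nvme gives up on that attempt with "Resetting controller failed." A quick way to read a saved copy of this console output is to count those markers; the snippet below is only a sketch and the log filename is an assumption, not something the test suite produces.

#!/usr/bin/env bash
# Sketch only: tally the failover markers from a saved copy of this console
# output. The filename is an assumption; the patterns are copied from the
# messages above.
log=${1:-bdevperf-console.log}

echo "aborted completions : $(grep -c 'ABORTED - SQ DELETION' "$log")"
echo "reads still queued  : $(grep -c 'NOTICE\*: READ sqid:1' "$log")"
echo "writes still queued : $(grep -c 'NOTICE\*: WRITE sqid:1' "$log")"
echo "connect() refused   : $(grep -c 'connect() failed, errno = 111' "$log")"
echo "reset attempts      : $(grep -c 'resetting controller' "$log")"
echo "reset failures      : $(grep -c 'Resetting controller failed' "$log")"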
00:37:38.968 [2024-12-13 03:48:39.985649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.968 [2024-12-13 03:48:39.986087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-12-13 03:48:39.986110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.968 [2024-12-13 03:48:39.986120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.986309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.968 [2024-12-13 03:48:39.986498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.968 [2024-12-13 03:48:39.986509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.968 [2024-12-13 03:48:39.986518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.968 [2024-12-13 03:48:39.986527] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.968 [2024-12-13 03:48:39.998681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.968 [2024-12-13 03:48:39.999112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.968 [2024-12-13 03:48:39.999133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.968 [2024-12-13 03:48:39.999142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.968 [2024-12-13 03:48:39.999323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:39.999503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:39.999517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:39.999526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:39.999534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:38.969 [2024-12-13 03:48:40.012055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.012540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.012562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.012573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.012769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.012972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.012984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.012993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.013003] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.969 [2024-12-13 03:48:40.025596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.026062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.026089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.026101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.026338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.026574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.026587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.026597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.026608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
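Each retry dials the same endpoint that was printed in the bdev_nvme_attach_controller parameter blob near the start of this run: controller name Nvme1, transport tcp, 10.0.0.2 port 4420, adrfam ipv4, subsystem nqn.2016-06.io.spdk:cnode1, host NQN nqn.2016-06.io.spdk:host1, header and data digests off. For reference, roughly the same attach issued by hand against a live target would look like the command below; the flag spellings are an assumption recalled from scripts/rpc.py, so check rpc.py bdev_nvme_attach_controller -h before relying on them.

# Sketch only: hand-issued equivalent of the attach parameters echoed above.
# Flag names are an assumption; verify with ./scripts/rpc.py bdev_nvme_attach_controller -h
./scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 \
    -t tcp \
    -a 10.0.0.2 \
    -s 4420 \
    -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1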
00:37:38.969 [2024-12-13 03:48:40.038895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.039386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.039408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.039419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.039613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.039808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.039820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.039828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.039841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.969 [2024-12-13 03:48:40.052258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.052686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.052708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.052719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.052933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.053173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.053186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.053195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.053222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:38.969 [2024-12-13 03:48:40.065715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.066123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.066145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.066156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.066350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.066544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.066556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.066565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.066574] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.969 [2024-12-13 03:48:40.079040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.079522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.079543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.079559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.079749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.079944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.079955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.079964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.079974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
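The cycles repeat back to back with nothing else in between; judging by the nvme_ctrlr_disconnect timestamps they land roughly 13 ms apart. If the retry cadence is what you care about, the deltas can be pulled straight out of a saved copy of the log; as above, the filename is an assumption and this is only a sketch.

# Sketch only: print the gap between consecutive reconnect attempts recorded
# above (the nvme_ctrlr.c:1728 "resetting controller" notices).
grep -o '\[[0-9-]* [0-9:.]*\] nvme_ctrlr\.c:1728' bdevperf-console.log |
awk -F'[][ ]+' '{
    split($3, t, ":")                  # t[1]=hh t[2]=mm t[3]=ss.fraction
    s = t[1] * 3600 + t[2] * 60 + t[3]
    if (NR > 1) printf "attempt %2d: +%.3f s\n", NR, s - prev
    prev = s
}'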
00:37:38.969 [2024-12-13 03:48:40.092429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.092905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.092934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.092945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.093140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.093335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.093346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.093355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.093364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.969 [2024-12-13 03:48:40.105902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.969 [2024-12-13 03:48:40.106365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.969 [2024-12-13 03:48:40.106388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.969 [2024-12-13 03:48:40.106398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.969 [2024-12-13 03:48:40.106589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.969 [2024-12-13 03:48:40.106779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.969 [2024-12-13 03:48:40.106791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.969 [2024-12-13 03:48:40.106800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.969 [2024-12-13 03:48:40.106810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:38.969 [2024-12-13 03:48:40.119163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.970 [2024-12-13 03:48:40.119639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.970 [2024-12-13 03:48:40.119697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.970 [2024-12-13 03:48:40.119738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.970 [2024-12-13 03:48:40.119935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.970 [2024-12-13 03:48:40.120146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.970 [2024-12-13 03:48:40.120158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.970 [2024-12-13 03:48:40.120167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.970 [2024-12-13 03:48:40.120177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.970 [2024-12-13 03:48:40.132294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.970 [2024-12-13 03:48:40.132777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.970 [2024-12-13 03:48:40.132835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.970 [2024-12-13 03:48:40.132875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.970 [2024-12-13 03:48:40.133542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.970 [2024-12-13 03:48:40.134140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.970 [2024-12-13 03:48:40.134161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.970 [2024-12-13 03:48:40.134171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.970 [2024-12-13 03:48:40.134180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:38.970 [2024-12-13 03:48:40.145420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.970 [2024-12-13 03:48:40.145909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.970 [2024-12-13 03:48:40.145998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.970 [2024-12-13 03:48:40.146031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.970 [2024-12-13 03:48:40.146681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.970 [2024-12-13 03:48:40.147156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.970 [2024-12-13 03:48:40.147168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.970 [2024-12-13 03:48:40.147177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.970 [2024-12-13 03:48:40.147186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:38.970 [2024-12-13 03:48:40.158653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:38.970 [2024-12-13 03:48:40.159154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:38.970 [2024-12-13 03:48:40.159213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:38.970 [2024-12-13 03:48:40.159245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:38.970 [2024-12-13 03:48:40.159894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:38.970 [2024-12-13 03:48:40.160372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:38.970 [2024-12-13 03:48:40.160384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:38.970 [2024-12-13 03:48:40.160393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:38.970 [2024-12-13 03:48:40.160402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.230 [2024-12-13 03:48:40.172133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.172475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.172534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.172575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.173235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.173438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.173450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.173459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.173468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.230 [2024-12-13 03:48:40.185396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.185882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.185905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.185915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.186118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.186313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.186325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.186335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.186345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.230 [2024-12-13 03:48:40.198808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.199274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.199298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.199308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.199503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.199699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.199711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.199720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.199730] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.230 [2024-12-13 03:48:40.212191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.212718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.212741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.212751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.212955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.213152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.213163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.213176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.213185] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.230 [2024-12-13 03:48:40.225482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.225973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.226034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.226068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.226551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.226741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.226753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.226762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.226771] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.230 [2024-12-13 03:48:40.238649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.239085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.239109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.239120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.239310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.239500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.239512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.239522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.239531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.230 [2024-12-13 03:48:40.251782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.252137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.252198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.252232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.252882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.230 [2024-12-13 03:48:40.253459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.230 [2024-12-13 03:48:40.253478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.230 [2024-12-13 03:48:40.253493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.230 [2024-12-13 03:48:40.253507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.230 [2024-12-13 03:48:40.265904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.230 [2024-12-13 03:48:40.266348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.230 [2024-12-13 03:48:40.266372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.230 [2024-12-13 03:48:40.266383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.230 [2024-12-13 03:48:40.266590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.266797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.266809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.266818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.266828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.231 [2024-12-13 03:48:40.279312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.279708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.279729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.279739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.279942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.280138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.280149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.280158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.280167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.231 [2024-12-13 03:48:40.292673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.293123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.293145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.293156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.293346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.293536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.293547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.293556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.293565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.231 [2024-12-13 03:48:40.305861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.306196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.306220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.306230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.306419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.306607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.306619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.306628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.306637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.231 [2024-12-13 03:48:40.319043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.319381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.319402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.319412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.319606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.319801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.319813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.319823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.319832] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.231 [2024-12-13 03:48:40.332286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.332769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.332831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.332864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.333455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.333645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.333656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.333665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.333675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.231 [2024-12-13 03:48:40.345523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.345928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.345950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.345960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.346153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.346342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.346354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.346363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.346373] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.231 [2024-12-13 03:48:40.358623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.359023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.359046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.359056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.359245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.359434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.359445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.359454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.359463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.231 [2024-12-13 03:48:40.372071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.372445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.372467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.372477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.372671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.372865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.372877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.231 [2024-12-13 03:48:40.372885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.231 [2024-12-13 03:48:40.372895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.231 [2024-12-13 03:48:40.385483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.231 [2024-12-13 03:48:40.385818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.231 [2024-12-13 03:48:40.385839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.231 [2024-12-13 03:48:40.385849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.231 [2024-12-13 03:48:40.386049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.231 [2024-12-13 03:48:40.386252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.231 [2024-12-13 03:48:40.386263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.232 [2024-12-13 03:48:40.386273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.232 [2024-12-13 03:48:40.386282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.232 [2024-12-13 03:48:40.398764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.232 [2024-12-13 03:48:40.399221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.232 [2024-12-13 03:48:40.399279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.232 [2024-12-13 03:48:40.399311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.232 [2024-12-13 03:48:40.399833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.232 [2024-12-13 03:48:40.400026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.232 [2024-12-13 03:48:40.400038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.232 [2024-12-13 03:48:40.400047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.232 [2024-12-13 03:48:40.400056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.232 [2024-12-13 03:48:40.412023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.232 [2024-12-13 03:48:40.412498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.232 [2024-12-13 03:48:40.412519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.232 [2024-12-13 03:48:40.412529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.232 [2024-12-13 03:48:40.412721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.232 [2024-12-13 03:48:40.412910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.232 [2024-12-13 03:48:40.412927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.232 [2024-12-13 03:48:40.412937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.232 [2024-12-13 03:48:40.412963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.232 [2024-12-13 03:48:40.425198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.232 [2024-12-13 03:48:40.425602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.232 [2024-12-13 03:48:40.425623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.232 [2024-12-13 03:48:40.425634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.232 [2024-12-13 03:48:40.425822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.232 [2024-12-13 03:48:40.426018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.232 [2024-12-13 03:48:40.426030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.232 [2024-12-13 03:48:40.426043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.232 [2024-12-13 03:48:40.426052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.492 [2024-12-13 03:48:40.438526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.492 [2024-12-13 03:48:40.439001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.492 [2024-12-13 03:48:40.439023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.492 [2024-12-13 03:48:40.439034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.492 [2024-12-13 03:48:40.439229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.492 [2024-12-13 03:48:40.439423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.492 [2024-12-13 03:48:40.439435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.492 [2024-12-13 03:48:40.439444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.492 [2024-12-13 03:48:40.439453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.492 [2024-12-13 03:48:40.451839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.492 [2024-12-13 03:48:40.452234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.492 [2024-12-13 03:48:40.452256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.492 [2024-12-13 03:48:40.452266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.492 [2024-12-13 03:48:40.452454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.492 [2024-12-13 03:48:40.452643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.492 [2024-12-13 03:48:40.452654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.492 [2024-12-13 03:48:40.452663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.492 [2024-12-13 03:48:40.452672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.492 [2024-12-13 03:48:40.465232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.492 [2024-12-13 03:48:40.465675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.492 [2024-12-13 03:48:40.465696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.492 [2024-12-13 03:48:40.465706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.492 [2024-12-13 03:48:40.465893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.492 [2024-12-13 03:48:40.466089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.492 [2024-12-13 03:48:40.466101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.492 [2024-12-13 03:48:40.466109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.492 [2024-12-13 03:48:40.466118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.492 [2024-12-13 03:48:40.478554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.492 [2024-12-13 03:48:40.479002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.492 [2024-12-13 03:48:40.479024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.492 [2024-12-13 03:48:40.479035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.492 [2024-12-13 03:48:40.479229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.492 [2024-12-13 03:48:40.479424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.479436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.479445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.479454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.493 [2024-12-13 03:48:40.491840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.492168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.492191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.492201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.492395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.492591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.492603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.492611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.492620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.493 [2024-12-13 03:48:40.505421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.505926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.505950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.505961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.506167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.506373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.506386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.506396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.506405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.493 [2024-12-13 03:48:40.518799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.519217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.519243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.519253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.519458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.519664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.519701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.519711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.519721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.493 [2024-12-13 03:48:40.532449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.532860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.532883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.532894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.533107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.533315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.533327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.533337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.533346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.493 [2024-12-13 03:48:40.545887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.546222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.546244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.546254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.546448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.546642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.546653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.546662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.546671] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.493 [2024-12-13 03:48:40.559276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.559766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.559787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.559797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.560000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.560194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.560206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.560215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.560225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.493 [2024-12-13 03:48:40.572762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.573236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.573259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.573270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.573476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.573682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.573694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.573704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.573713] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.493 [2024-12-13 03:48:40.586405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.586904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.586934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.586946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.587151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.587357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.587370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.587379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.493 [2024-12-13 03:48:40.587389] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.493 [2024-12-13 03:48:40.599788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.493 [2024-12-13 03:48:40.600194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.493 [2024-12-13 03:48:40.600215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.493 [2024-12-13 03:48:40.600225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.493 [2024-12-13 03:48:40.600418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.493 [2024-12-13 03:48:40.600613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.493 [2024-12-13 03:48:40.600628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.493 [2024-12-13 03:48:40.600637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.600647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.494 [2024-12-13 03:48:40.613104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.613557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.613609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.613645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.614205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.614400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.614412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.614421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.614430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.494 [2024-12-13 03:48:40.626151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.626578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.626598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.626608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.626786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.626990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.627001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.627010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.627019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.494 [2024-12-13 03:48:40.639332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.639753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.639773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.639784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.639979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.640169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.640180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.640189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.640201] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.494 7344.33 IOPS, 28.69 MiB/s [2024-12-13T02:48:40.703Z] [2024-12-13 03:48:40.652538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.652962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.652984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.652995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.653189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.653384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.653396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.653405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.653414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.494 [2024-12-13 03:48:40.665680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.666130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.666152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.666162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.666350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.666539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.666550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.666559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.666568] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.494 [2024-12-13 03:48:40.678819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.679208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.679230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.679240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.679428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.679616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.679627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.679636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.679645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.494 [2024-12-13 03:48:40.692062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.494 [2024-12-13 03:48:40.692460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.494 [2024-12-13 03:48:40.692520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.494 [2024-12-13 03:48:40.692552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.494 [2024-12-13 03:48:40.693216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.494 [2024-12-13 03:48:40.693631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.494 [2024-12-13 03:48:40.693643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.494 [2024-12-13 03:48:40.693652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.494 [2024-12-13 03:48:40.693661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.754 [2024-12-13 03:48:40.705524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.754 [2024-12-13 03:48:40.705964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.754 [2024-12-13 03:48:40.705987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.754 [2024-12-13 03:48:40.705997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.706191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.706384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.706396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.706405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.706420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.755 [2024-12-13 03:48:40.718820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.719284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.719306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.719316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.719511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.719704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.719716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.719725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.719734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.755 [2024-12-13 03:48:40.731914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.732354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.732411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.732451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.733119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.733597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.733608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.733617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.733626] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.755 [2024-12-13 03:48:40.745071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.745489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.745509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.745519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.745698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.745877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.745887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.745896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.745904] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.755 [2024-12-13 03:48:40.758232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.758693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.758752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.758784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.759291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.759604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.759622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.759636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.759650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.755 [2024-12-13 03:48:40.772141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.772506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.772529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.772539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.772749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.772965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.772977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.772987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.772996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.755 [2024-12-13 03:48:40.785249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.785668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.785689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.785698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.785876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.786083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.786094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.786103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.786112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.755 [2024-12-13 03:48:40.798265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.798698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.798758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.798789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.799456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.799966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.799984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.799998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.800012] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.755 [2024-12-13 03:48:40.812246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.812690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.812712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.812723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.812936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.813144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.813160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.813170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.813179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.755 [2024-12-13 03:48:40.825283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.755 [2024-12-13 03:48:40.825615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.755 [2024-12-13 03:48:40.825646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.755 [2024-12-13 03:48:40.825656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.755 [2024-12-13 03:48:40.825833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.755 [2024-12-13 03:48:40.826038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.755 [2024-12-13 03:48:40.826049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.755 [2024-12-13 03:48:40.826058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.755 [2024-12-13 03:48:40.826067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.755 [2024-12-13 03:48:40.838358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.838779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.838799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.838808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.839013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.839201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.839212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.839221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.839230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.756 [2024-12-13 03:48:40.851442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.851884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.851905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.851915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.852110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.852298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.852309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.852318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.852329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.756 [2024-12-13 03:48:40.864494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.864948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.865008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.865040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.865689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.866064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.866076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.866084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.866093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.756 [2024-12-13 03:48:40.877541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.877979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.878040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.878072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.878724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.878913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.878930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.878939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.878948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.756 [2024-12-13 03:48:40.890576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.890995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.891015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.891024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.891202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.891381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.891391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.891400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.891408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.756 [2024-12-13 03:48:40.903642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.904103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.904123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.904133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.904311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.904489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.904500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.904508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.904517] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.756 [2024-12-13 03:48:40.916678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.917127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.917149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.917159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.917347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.917536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.917547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.917563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.917572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.756 [2024-12-13 03:48:40.929796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.930176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.930197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.930207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.930395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.930582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.930594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.930603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.930611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:39.756 [2024-12-13 03:48:40.942894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.943329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.943351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.943365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.943559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.943753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.943765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.943774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.943784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:39.756 [2024-12-13 03:48:40.956277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:39.756 [2024-12-13 03:48:40.956723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.756 [2024-12-13 03:48:40.956744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:39.756 [2024-12-13 03:48:40.956754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:39.756 [2024-12-13 03:48:40.956953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:39.756 [2024-12-13 03:48:40.957148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:39.756 [2024-12-13 03:48:40.957159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:39.756 [2024-12-13 03:48:40.957168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:39.756 [2024-12-13 03:48:40.957177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.016 [2024-12-13 03:48:40.969690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.016 [2024-12-13 03:48:40.970126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.016 [2024-12-13 03:48:40.970147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.016 [2024-12-13 03:48:40.970157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.016 [2024-12-13 03:48:40.970346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.016 [2024-12-13 03:48:40.970534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.016 [2024-12-13 03:48:40.970545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.016 [2024-12-13 03:48:40.970554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.016 [2024-12-13 03:48:40.970562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.016 [2024-12-13 03:48:40.982780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.016 [2024-12-13 03:48:40.983233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.016 [2024-12-13 03:48:40.983290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.016 [2024-12-13 03:48:40.983322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.016 [2024-12-13 03:48:40.983846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.016 [2024-12-13 03:48:40.984042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.016 [2024-12-13 03:48:40.984054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.016 [2024-12-13 03:48:40.984063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.016 [2024-12-13 03:48:40.984072] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.016 [2024-12-13 03:48:40.995864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.016 [2024-12-13 03:48:40.996282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.016 [2024-12-13 03:48:40.996302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.016 [2024-12-13 03:48:40.996312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.016 [2024-12-13 03:48:40.996491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.016 [2024-12-13 03:48:40.996670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:40.996680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:40.996689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:40.996697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.017 [2024-12-13 03:48:41.008981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.009412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.009469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.009500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.010018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.010207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.010219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.010228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.010236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.017 [2024-12-13 03:48:41.022039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.022456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.022476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.022486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.022663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.022846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.022856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.022869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.022878] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.017 [2024-12-13 03:48:41.035134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.035557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.035577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.035588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.035775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.035970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.035982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.035991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.036000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.017 [2024-12-13 03:48:41.048224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.048674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.048696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.048706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.048894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.049088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.049100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.049109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.049118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.017 [2024-12-13 03:48:41.061334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.061802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.061861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.061893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.062446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.062759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.062776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.062791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.062804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.017 [2024-12-13 03:48:41.075659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.076126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.076148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.076159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.076364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.076570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.076582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.076592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.076602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.017 [2024-12-13 03:48:41.088755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.089202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.089223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.089240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.089428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.089617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.089628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.089636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.089645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.017 [2024-12-13 03:48:41.101804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.102253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.102274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.102285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.102473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.102662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.102673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.102682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.102690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.017 [2024-12-13 03:48:41.114918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.115344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.115367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.115377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.115556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.115734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.017 [2024-12-13 03:48:41.115744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.017 [2024-12-13 03:48:41.115753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.017 [2024-12-13 03:48:41.115761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.017 [2024-12-13 03:48:41.127987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.017 [2024-12-13 03:48:41.128369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.017 [2024-12-13 03:48:41.128390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.017 [2024-12-13 03:48:41.128400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.017 [2024-12-13 03:48:41.128588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.017 [2024-12-13 03:48:41.128776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.128787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.128796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.128805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.018 [2024-12-13 03:48:41.141037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.141435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.141455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.141464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.141643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.141821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.141831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.141840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.141848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.018 [2024-12-13 03:48:41.154071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.154499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.154529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.154710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.154888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.154899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.154907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.154922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.018 [2024-12-13 03:48:41.167184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.167601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.167620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.167629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.167808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.168010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.168022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.168030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.168039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.018 [2024-12-13 03:48:41.180303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.180736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.180756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.180765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.180965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.181154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.181165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.181174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.181183] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.018 [2024-12-13 03:48:41.193465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.193933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.193994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.194027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.194468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.194663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.194677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.194686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.194696] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.018 [2024-12-13 03:48:41.206787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.207243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.207263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.207273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.207461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.207649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.207660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.207669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.207678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.018 [2024-12-13 03:48:41.220034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.018 [2024-12-13 03:48:41.220474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.018 [2024-12-13 03:48:41.220495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.018 [2024-12-13 03:48:41.220505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.018 [2024-12-13 03:48:41.220698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.018 [2024-12-13 03:48:41.220892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.018 [2024-12-13 03:48:41.220904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.018 [2024-12-13 03:48:41.220913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.018 [2024-12-13 03:48:41.220928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.278 [2024-12-13 03:48:41.233226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.278 [2024-12-13 03:48:41.233644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.278 [2024-12-13 03:48:41.233665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.278 [2024-12-13 03:48:41.233676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.278 [2024-12-13 03:48:41.233865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.278 [2024-12-13 03:48:41.234079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.278 [2024-12-13 03:48:41.234091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.278 [2024-12-13 03:48:41.234104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.278 [2024-12-13 03:48:41.234113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.278 [2024-12-13 03:48:41.246260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.278 [2024-12-13 03:48:41.246675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.278 [2024-12-13 03:48:41.246696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.278 [2024-12-13 03:48:41.246705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.278 [2024-12-13 03:48:41.246883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.278 [2024-12-13 03:48:41.247090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.278 [2024-12-13 03:48:41.247101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.278 [2024-12-13 03:48:41.247110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.278 [2024-12-13 03:48:41.247119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.278 [2024-12-13 03:48:41.259338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.278 [2024-12-13 03:48:41.259783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.278 [2024-12-13 03:48:41.259842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.278 [2024-12-13 03:48:41.259874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.260541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.260730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.260742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.260750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.260760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.279 [2024-12-13 03:48:41.272479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.272857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.272878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.272888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.273082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.273279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.273290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.273299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.273308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.279 [2024-12-13 03:48:41.285591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.286021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.286043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.286053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.286233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.286411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.286422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.286430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.286439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.279 [2024-12-13 03:48:41.298671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.299032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.299054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.299063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.299242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.299421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.299431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.299439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.299448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.279 [2024-12-13 03:48:41.311884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.312336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.312395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.312428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.312947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.313137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.313149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.313159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.313167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.279 [2024-12-13 03:48:41.325027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.325502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.325525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.325538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.325727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.325915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.325933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.325942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.325951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.279 [2024-12-13 03:48:41.338163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.338607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.338633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.338644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.338832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.339027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.339039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.339048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.339057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.279 [2024-12-13 03:48:41.351475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.351911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.351936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.351946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.352134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.352323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.352334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.352342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.352351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.279 [2024-12-13 03:48:41.364719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.365178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.365200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.365210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.365408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.365601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.365612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.365621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.365630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.279 [2024-12-13 03:48:41.378031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.378462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.378483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.378493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.378681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.279 [2024-12-13 03:48:41.378869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.279 [2024-12-13 03:48:41.378880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.279 [2024-12-13 03:48:41.378889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.279 [2024-12-13 03:48:41.378899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.279 [2024-12-13 03:48:41.391296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.279 [2024-12-13 03:48:41.391743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.279 [2024-12-13 03:48:41.391764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.279 [2024-12-13 03:48:41.391774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.279 [2024-12-13 03:48:41.391968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.392157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.392168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.392177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.392186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.280 [2024-12-13 03:48:41.404328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.404778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.404841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.404874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.405539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.406129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.406144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.406153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.406162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.280 [2024-12-13 03:48:41.418769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.419231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.419289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.419322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.419987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.420387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.420399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.420408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.420418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.280 [2024-12-13 03:48:41.431823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.432274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.432296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.432306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.432494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.432682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.432693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.432702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.432711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.280 [2024-12-13 03:48:41.444872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.445288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.445310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.445320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.445509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.445698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.445710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.445719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.445732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.280 [2024-12-13 03:48:41.458323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.458688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.458709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.458719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.458913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.459112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.459124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.459139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.459148] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.280 [2024-12-13 03:48:41.471455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.471911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.471984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.472017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.472456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.280 [2024-12-13 03:48:41.472644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.280 [2024-12-13 03:48:41.472655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.280 [2024-12-13 03:48:41.472663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.280 [2024-12-13 03:48:41.472672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.280 [2024-12-13 03:48:41.484824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.280 [2024-12-13 03:48:41.485289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.280 [2024-12-13 03:48:41.485310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.280 [2024-12-13 03:48:41.485320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.280 [2024-12-13 03:48:41.485514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.485712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.485724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.485733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.485742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.541 [2024-12-13 03:48:41.497988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.498419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.498474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.498508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.499174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.499619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.499630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.499639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.499647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.541 [2024-12-13 03:48:41.511058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.511466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.511486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.511495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.511674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.511852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.511863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.511871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.511879] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.541 [2024-12-13 03:48:41.524141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.524579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.524637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.524669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.525193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.525384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.525397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.525405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.525414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.541 [2024-12-13 03:48:41.537371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.537690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.537711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.537725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.537912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.538108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.538119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.538127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.538137] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.541 [2024-12-13 03:48:41.550591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.550984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.551044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.551077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.551608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.551796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.551807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.551816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.551825] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.541 [2024-12-13 03:48:41.563801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.564250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.564271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.564281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.564468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.564656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.564667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.564676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.564685] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.541 [2024-12-13 03:48:41.576963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.577393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.577413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.577423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.577611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.577802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.577813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.577822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.577831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.541 [2024-12-13 03:48:41.590275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.590721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.590742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.590752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.590950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.591145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.591156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.591165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.591174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.541 [2024-12-13 03:48:41.603568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.604009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.604031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.541 [2024-12-13 03:48:41.604041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.541 [2024-12-13 03:48:41.604235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.541 [2024-12-13 03:48:41.604429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.541 [2024-12-13 03:48:41.604440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.541 [2024-12-13 03:48:41.604449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.541 [2024-12-13 03:48:41.604458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.541 [2024-12-13 03:48:41.616848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.541 [2024-12-13 03:48:41.617287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.541 [2024-12-13 03:48:41.617308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.617318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.617512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.617706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.617717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.617733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.617741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.542 [2024-12-13 03:48:41.630221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.630689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.630712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.630722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.630945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.631177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.631189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.631198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.631207] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.542 [2024-12-13 03:48:41.643613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.644052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.644074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.644084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.644278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.644472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.644483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.644493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.644502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.542 5508.25 IOPS, 21.52 MiB/s [2024-12-13T02:48:41.751Z] [2024-12-13 03:48:41.656995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.657356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.657377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.657388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.657581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.657776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.657787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.657796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.657809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.542 [2024-12-13 03:48:41.670552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.670983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.671006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.671017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.671222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.671428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.671440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.671450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.671459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.542 [2024-12-13 03:48:41.684033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.684491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.684514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.684525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.684730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.684942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.684955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.684964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.684974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.542 [2024-12-13 03:48:41.697637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.698117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.698140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.698150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.698344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.698538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.698550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.698560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.698570] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.542 [2024-12-13 03:48:41.711139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.711614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.711636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.711646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.711852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.712062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.712075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.712085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.712095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.542 [2024-12-13 03:48:41.724633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.725085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.725110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.725121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.725328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.725535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.725546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.725556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.725566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.542 [2024-12-13 03:48:41.737970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.542 [2024-12-13 03:48:41.738420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.542 [2024-12-13 03:48:41.738443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.542 [2024-12-13 03:48:41.738453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.542 [2024-12-13 03:48:41.738658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.542 [2024-12-13 03:48:41.738863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.542 [2024-12-13 03:48:41.738875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.542 [2024-12-13 03:48:41.738884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.542 [2024-12-13 03:48:41.738894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.803 [2024-12-13 03:48:41.751580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.803 [2024-12-13 03:48:41.752033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.803 [2024-12-13 03:48:41.752057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.803 [2024-12-13 03:48:41.752071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.803 [2024-12-13 03:48:41.752278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.803 [2024-12-13 03:48:41.752484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.803 [2024-12-13 03:48:41.752496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.803 [2024-12-13 03:48:41.752505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.803 [2024-12-13 03:48:41.752515] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.803 [2024-12-13 03:48:41.765054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.803 [2024-12-13 03:48:41.765484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.803 [2024-12-13 03:48:41.765507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.803 [2024-12-13 03:48:41.765518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.803 [2024-12-13 03:48:41.765724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.803 [2024-12-13 03:48:41.765936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.803 [2024-12-13 03:48:41.765948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.803 [2024-12-13 03:48:41.765958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.803 [2024-12-13 03:48:41.765968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.803 [2024-12-13 03:48:41.778546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.803 [2024-12-13 03:48:41.778974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.803 [2024-12-13 03:48:41.778997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.803 [2024-12-13 03:48:41.779008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.803 [2024-12-13 03:48:41.779215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.803 [2024-12-13 03:48:41.779422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.803 [2024-12-13 03:48:41.779434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.803 [2024-12-13 03:48:41.779444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.803 [2024-12-13 03:48:41.779453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.803 [2024-12-13 03:48:41.791996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.803 [2024-12-13 03:48:41.792449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.803 [2024-12-13 03:48:41.792471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.803 [2024-12-13 03:48:41.792482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.803 [2024-12-13 03:48:41.792687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.803 [2024-12-13 03:48:41.792928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.803 [2024-12-13 03:48:41.792942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.803 [2024-12-13 03:48:41.792952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.803 [2024-12-13 03:48:41.792962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.803 [2024-12-13 03:48:41.805442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.803 [2024-12-13 03:48:41.805788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.805809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.805819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.806018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.806213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.806225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.806234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.806243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.804 [2024-12-13 03:48:41.818836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.819292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.819315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.819325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.819520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.819714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.819725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.819735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.819744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.804 [2024-12-13 03:48:41.832220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.832684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.832707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.832717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.832930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.833137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.833149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.833162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.833172] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.804 [2024-12-13 03:48:41.845652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.846117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.846146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.846157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.846364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.846570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.846582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.846592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.846602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.804 [2024-12-13 03:48:41.858989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.859401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.859423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.859433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.859628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.859821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.859832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.859841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.859850] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.804 [2024-12-13 03:48:41.872516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.872886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.872908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.872924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.873129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.873336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.873347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.873357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.873367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.804 [2024-12-13 03:48:41.886121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.886557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.886579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.886589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.886782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.887001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.887013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.887024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.887033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.804 [2024-12-13 03:48:41.899567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.900024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.900047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.900058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.900252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.900446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.900457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.900466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.900475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.804 [2024-12-13 03:48:41.913107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.913551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.913574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.913584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.913792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.914005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.914018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.914028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.914038] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.804 [2024-12-13 03:48:41.926618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.927074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.804 [2024-12-13 03:48:41.927101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.804 [2024-12-13 03:48:41.927112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.804 [2024-12-13 03:48:41.927318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.804 [2024-12-13 03:48:41.927524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.804 [2024-12-13 03:48:41.927535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.804 [2024-12-13 03:48:41.927544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.804 [2024-12-13 03:48:41.927554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.804 [2024-12-13 03:48:41.940202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.804 [2024-12-13 03:48:41.940570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.805 [2024-12-13 03:48:41.940591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.805 [2024-12-13 03:48:41.940601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.805 [2024-12-13 03:48:41.940806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.805 [2024-12-13 03:48:41.941017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.805 [2024-12-13 03:48:41.941029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.805 [2024-12-13 03:48:41.941039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.805 [2024-12-13 03:48:41.941048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.805 [2024-12-13 03:48:41.953703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.805 [2024-12-13 03:48:41.954177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.805 [2024-12-13 03:48:41.954199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.805 [2024-12-13 03:48:41.954211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.805 [2024-12-13 03:48:41.954418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.805 [2024-12-13 03:48:41.954625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.805 [2024-12-13 03:48:41.954638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.805 [2024-12-13 03:48:41.954647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.805 [2024-12-13 03:48:41.954658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.805 [2024-12-13 03:48:41.967265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.805 [2024-12-13 03:48:41.967754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.805 [2024-12-13 03:48:41.967777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.805 [2024-12-13 03:48:41.967788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.805 [2024-12-13 03:48:41.968021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.805 [2024-12-13 03:48:41.968228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.805 [2024-12-13 03:48:41.968240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.805 [2024-12-13 03:48:41.968250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.805 [2024-12-13 03:48:41.968260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.805 [2024-12-13 03:48:41.980784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.805 [2024-12-13 03:48:41.981271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.805 [2024-12-13 03:48:41.981294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.805 [2024-12-13 03:48:41.981306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.805 [2024-12-13 03:48:41.981511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.805 [2024-12-13 03:48:41.981718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.805 [2024-12-13 03:48:41.981730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.805 [2024-12-13 03:48:41.981740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.805 [2024-12-13 03:48:41.981750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:40.805 [2024-12-13 03:48:41.994074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.805 [2024-12-13 03:48:41.994424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.805 [2024-12-13 03:48:41.994445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.805 [2024-12-13 03:48:41.994455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.805 [2024-12-13 03:48:41.994649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.805 [2024-12-13 03:48:41.994843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.805 [2024-12-13 03:48:41.994854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.805 [2024-12-13 03:48:41.994863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.805 [2024-12-13 03:48:41.994872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:40.805 [2024-12-13 03:48:42.007395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:40.805 [2024-12-13 03:48:42.007877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:40.805 [2024-12-13 03:48:42.007900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:40.805 [2024-12-13 03:48:42.007910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:40.805 [2024-12-13 03:48:42.008111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:40.805 [2024-12-13 03:48:42.008305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:40.805 [2024-12-13 03:48:42.008320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:40.805 [2024-12-13 03:48:42.008329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:40.805 [2024-12-13 03:48:42.008338] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.065 [2024-12-13 03:48:42.020579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.065 [2024-12-13 03:48:42.021033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-12-13 03:48:42.021055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.065 [2024-12-13 03:48:42.021065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.065 [2024-12-13 03:48:42.021271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.065 [2024-12-13 03:48:42.021449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.065 [2024-12-13 03:48:42.021460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.065 [2024-12-13 03:48:42.021468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.065 [2024-12-13 03:48:42.021477] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.065 [2024-12-13 03:48:42.033717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.065 [2024-12-13 03:48:42.034178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.065 [2024-12-13 03:48:42.034236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.065 [2024-12-13 03:48:42.034268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.065 [2024-12-13 03:48:42.034726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.065 [2024-12-13 03:48:42.034915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.034931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.034940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.034949] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.066 [2024-12-13 03:48:42.046790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.047220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.047241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.047251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.047439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.047627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.047638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.047653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.047661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.066 [2024-12-13 03:48:42.059888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.060358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.060417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.060449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.060938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.061143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.061154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.061163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.061172] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.066 [2024-12-13 03:48:42.072902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.073352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.073372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.073382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.073561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.073740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.073751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.073759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.073767] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.066 [2024-12-13 03:48:42.085953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.086405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.086462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.086494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.087003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.087192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.087202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.087212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.087220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.066 [2024-12-13 03:48:42.099017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.099436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.099456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.099466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.099643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.099822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.099832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.099841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.099849] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.066 [2024-12-13 03:48:42.112078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.112504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.112564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.112595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.113096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.113290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.113301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.113310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.113319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.066 [2024-12-13 03:48:42.125231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.125611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.125632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.125642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.125830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.126025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.126037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.126046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.126055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.066 [2024-12-13 03:48:42.138351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.138779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.138801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.138813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.139009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.139198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.139210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.139218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.139227] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.066 [2024-12-13 03:48:42.151451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.151899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.151925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.151936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.152124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.152313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.152324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.066 [2024-12-13 03:48:42.152333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.066 [2024-12-13 03:48:42.152342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.066 [2024-12-13 03:48:42.164576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.066 [2024-12-13 03:48:42.165057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.066 [2024-12-13 03:48:42.165117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.066 [2024-12-13 03:48:42.165150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.066 [2024-12-13 03:48:42.165800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.066 [2024-12-13 03:48:42.166163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.066 [2024-12-13 03:48:42.166175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.166184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.166193] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.067 [2024-12-13 03:48:42.177765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.178175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.178197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.178206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.178398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.178587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.178617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.178626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.178635] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.067 [2024-12-13 03:48:42.190988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.191434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.191456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.191466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.191654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.191843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.191854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.191863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.191872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.067 [2024-12-13 03:48:42.204103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.204547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.204569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.204579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.204768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.204980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.204994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.205004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.205013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.067 [2024-12-13 03:48:42.217449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.217904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.217978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.218010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.218440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.218634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.218654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.218664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.218673] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.067 [2024-12-13 03:48:42.230941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.231379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.231402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.231413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.231601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.231789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.231801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.231810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.231819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.067 [2024-12-13 03:48:42.243952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.244419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.244480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.244513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.245086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.245354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.245371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.245386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.245399] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.067 [2024-12-13 03:48:42.257923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.258392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.258451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.258483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.259003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.259210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.259222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.259232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.259245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.067 [2024-12-13 03:48:42.271275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.067 [2024-12-13 03:48:42.271732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.067 [2024-12-13 03:48:42.271790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.067 [2024-12-13 03:48:42.271822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.067 [2024-12-13 03:48:42.272367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.067 [2024-12-13 03:48:42.272562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.067 [2024-12-13 03:48:42.272573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.067 [2024-12-13 03:48:42.272582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.067 [2024-12-13 03:48:42.272591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.327 [2024-12-13 03:48:42.284421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.327 [2024-12-13 03:48:42.284884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.327 [2024-12-13 03:48:42.284954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.284987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.285446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.285635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.285645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.285654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.285663] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.328 [2024-12-13 03:48:42.297483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.297909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.297935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.297945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.298124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.298302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.298312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.298320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.298329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.328 [2024-12-13 03:48:42.310806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.311266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.311287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.311297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.311486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.311674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.311685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.311694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.311703] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.328 [2024-12-13 03:48:42.323853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.324314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.324336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.324346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.324535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.324723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.324735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.324743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.324752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.328 [2024-12-13 03:48:42.336982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.337455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.337511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.337543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.338207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.338576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.338587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.338595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.338604] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.328 [2024-12-13 03:48:42.350111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.350536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.350556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.350572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.350751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.350936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.350964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.350973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.350982] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.328 [2024-12-13 03:48:42.363192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.363596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.363617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.363627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.363815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.364011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.364023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.364032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.364040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.328 [2024-12-13 03:48:42.376275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.376724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.376745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.376755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.376950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.377160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.377171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.377180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.377189] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.328 [2024-12-13 03:48:42.389474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.389942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.389964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.389974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.390168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.390351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.390362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.390371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.390380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.328 [2024-12-13 03:48:42.402536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.402931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.328 [2024-12-13 03:48:42.402953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.328 [2024-12-13 03:48:42.402963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.328 [2024-12-13 03:48:42.403151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.328 [2024-12-13 03:48:42.403340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.328 [2024-12-13 03:48:42.403351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.328 [2024-12-13 03:48:42.403360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.328 [2024-12-13 03:48:42.403376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.328 [2024-12-13 03:48:42.415823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.328 [2024-12-13 03:48:42.416291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.416313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.416323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.416511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.416700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.416711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.416720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.416729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.329 [2024-12-13 03:48:42.429010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.429388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.429446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.429477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.429982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.430176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.430188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.430200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.430210] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.329 [2024-12-13 03:48:42.442272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.442710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.442769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.442802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.443465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.443891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.443902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.443912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.443925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.329 [2024-12-13 03:48:42.455464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.455926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.455948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.455958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.456147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.456336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.456347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.456357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.456366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.329 [2024-12-13 03:48:42.468784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.469261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.469319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.469350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.469854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.470054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.470066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.470076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.470085] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.329 [2024-12-13 03:48:42.482066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.482541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.482598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.482630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.483291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.483842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.483853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.483862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.483871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.329 [2024-12-13 03:48:42.495424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.495858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.495880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.495890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.496104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.496298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.496310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.496319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.496328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.329 [2024-12-13 03:48:42.508571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.509029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.509088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.509120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.509769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.510089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.510101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.510110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.510119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.329 [2024-12-13 03:48:42.521639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.329 [2024-12-13 03:48:42.522062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.329 [2024-12-13 03:48:42.522086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.329 [2024-12-13 03:48:42.522096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.329 [2024-12-13 03:48:42.522274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.329 [2024-12-13 03:48:42.522453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.329 [2024-12-13 03:48:42.522463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.329 [2024-12-13 03:48:42.522472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.329 [2024-12-13 03:48:42.522480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.590 [2024-12-13 03:48:42.535030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.535475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.535496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.535506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.535700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.535893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.535904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.535913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.535930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.590 [2024-12-13 03:48:42.548206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.548635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.548694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.548726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.549194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.549383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.549395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.549403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.549412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.590 [2024-12-13 03:48:42.561344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.561787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.561845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.561876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.562553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.563012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.563023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.563032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.563041] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.590 [2024-12-13 03:48:42.574504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.574955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.574977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.574987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.575176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.575365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.575376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.575385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.575394] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.590 [2024-12-13 03:48:42.587764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.588234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.588254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.588264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.588452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.588640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.588652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.588660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.588669] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.590 [2024-12-13 03:48:42.600958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.601345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.601366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.601376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.601564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.601756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.601767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.601776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.601785] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.590 [2024-12-13 03:48:42.614112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.614553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.614573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.614584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.614772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.614983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.614996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.615005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.615014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.590 [2024-12-13 03:48:42.627190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.627615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.627635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.627645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.627823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.628026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.628038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.628047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.628056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.590 [2024-12-13 03:48:42.640241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.640678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.590 [2024-12-13 03:48:42.640698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.590 [2024-12-13 03:48:42.640707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.590 [2024-12-13 03:48:42.640885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.590 [2024-12-13 03:48:42.641091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.590 [2024-12-13 03:48:42.641103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.590 [2024-12-13 03:48:42.641115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.590 [2024-12-13 03:48:42.641124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.590 4406.60 IOPS, 17.21 MiB/s [2024-12-13T02:48:42.799Z] [2024-12-13 03:48:42.654565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.590 [2024-12-13 03:48:42.655013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.655035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.655046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.655235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.655423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.655434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.655443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.655452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.591 [2024-12-13 03:48:42.667673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.668096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.668117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.668126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.668305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.668484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.668494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.668503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.668511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.591 [2024-12-13 03:48:42.680891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.681318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.681339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.681349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.681537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.681725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.681736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.681745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.681754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.591 [2024-12-13 03:48:42.694018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.694449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.694470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.694480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.694668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.694857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.694868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.694877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.694886] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.591 [2024-12-13 03:48:42.707126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.707576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.707598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.707608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.707796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.708008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.708020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.708030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.708040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.591 [2024-12-13 03:48:42.720603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.721059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.721091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.721101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.721289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.721478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.721488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.721497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.721506] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.591 [2024-12-13 03:48:42.733798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.734289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.734357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.734389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.735055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.735511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.735522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.735531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.735540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.591 [2024-12-13 03:48:42.746885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.747343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.747365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.747375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.747564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.747752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.747762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.747771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.747780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.591 [2024-12-13 03:48:42.759992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.760436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.760457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.760466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.760654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.760842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.760854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.760862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.760871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.591 [2024-12-13 03:48:42.773086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.773534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.591 [2024-12-13 03:48:42.773555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.591 [2024-12-13 03:48:42.773564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.591 [2024-12-13 03:48:42.773756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.591 [2024-12-13 03:48:42.773951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.591 [2024-12-13 03:48:42.773962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.591 [2024-12-13 03:48:42.773971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.591 [2024-12-13 03:48:42.773980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.591 [2024-12-13 03:48:42.786101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.591 [2024-12-13 03:48:42.786518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.592 [2024-12-13 03:48:42.786538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.592 [2024-12-13 03:48:42.786575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.592 [2024-12-13 03:48:42.787243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.592 [2024-12-13 03:48:42.787714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.592 [2024-12-13 03:48:42.787725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.592 [2024-12-13 03:48:42.787734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.592 [2024-12-13 03:48:42.787743] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.852 [2024-12-13 03:48:42.799428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.852 [2024-12-13 03:48:42.799878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.852 [2024-12-13 03:48:42.799898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.852 [2024-12-13 03:48:42.799908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.852 [2024-12-13 03:48:42.800103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.852 [2024-12-13 03:48:42.800292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.852 [2024-12-13 03:48:42.800303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.852 [2024-12-13 03:48:42.800312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.852 [2024-12-13 03:48:42.800321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.852 [2024-12-13 03:48:42.812488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.852 [2024-12-13 03:48:42.812938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.852 [2024-12-13 03:48:42.812960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.852 [2024-12-13 03:48:42.812970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.852 [2024-12-13 03:48:42.813159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.852 [2024-12-13 03:48:42.813348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.852 [2024-12-13 03:48:42.813362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.852 [2024-12-13 03:48:42.813371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.852 [2024-12-13 03:48:42.813380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.852 [2024-12-13 03:48:42.825656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.852 [2024-12-13 03:48:42.826104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.852 [2024-12-13 03:48:42.826126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.852 [2024-12-13 03:48:42.826136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.852 [2024-12-13 03:48:42.826324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.852 [2024-12-13 03:48:42.826512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.826523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.826532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.826540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.853 [2024-12-13 03:48:42.838754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.839222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.839279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.839312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.839761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.839956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.839967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.839976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.839985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.853 [2024-12-13 03:48:42.852720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.853115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.853139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.853150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.853356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.853561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.853573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.853582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.853595] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.853 [2024-12-13 03:48:42.865737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.866190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.866248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.866280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.866795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.866990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.867003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.867012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.867021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.853 [2024-12-13 03:48:42.879020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.879483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.879541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.879572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.880237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.880748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.880766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.880780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.880794] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.853 [2024-12-13 03:48:42.893081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.893532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.893554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.893565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.893769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.893981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.893994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.894004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.894014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.853 [2024-12-13 03:48:42.906095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.906520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.906540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.906549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.906727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.906905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.906915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.906932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.906940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2911093 Killed "${NVMF_APP[@]}" "$@" 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2912670 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2912670 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2912670 ']' 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:41.853 [2024-12-13 03:48:42.919443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:41.853 [2024-12-13 03:48:42.919886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:41.853 [2024-12-13 03:48:42.919909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.919926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.920119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 03:48:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:41.853 [2024-12-13 03:48:42.920314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.920325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.920335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.920347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.853 [2024-12-13 03:48:42.932745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.933221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.933242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.853 [2024-12-13 03:48:42.933252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.853 [2024-12-13 03:48:42.933446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.853 [2024-12-13 03:48:42.933640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.853 [2024-12-13 03:48:42.933651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.853 [2024-12-13 03:48:42.933660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.853 [2024-12-13 03:48:42.933668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.853 [2024-12-13 03:48:42.946103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.853 [2024-12-13 03:48:42.946545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.853 [2024-12-13 03:48:42.946567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:42.946578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:42.946771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:42.946971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:42.946984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:42.946993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:42.947002] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.854 [2024-12-13 03:48:42.959436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:42.959928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:42.959951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:42.959962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:42.960158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:42.960352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:42.960363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:42.960373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:42.960383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.854 [2024-12-13 03:48:42.972782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:42.973259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:42.973281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:42.973291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:42.973486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:42.973689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:42.973700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:42.973710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:42.973720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.854 [2024-12-13 03:48:42.986141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:42.986613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:42.986651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:42.986663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:42.986856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:42.987060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:42.987072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:42.987081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:42.987091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.854 [2024-12-13 03:48:42.999556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:43.000001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:43.000024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:43.000035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:43.000230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:43.000427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:43.000438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:43.000447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:43.000457] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.854 [2024-12-13 03:48:43.000870] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:41.854 [2024-12-13 03:48:43.000950] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.854 [2024-12-13 03:48:43.012913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:43.013329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:43.013350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:43.013360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:43.013556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:43.013752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:43.013764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:43.013773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:43.013782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.854 [2024-12-13 03:48:43.026440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:43.026859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:43.026884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:43.026896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:43.027114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:43.027323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:43.027337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:43.027349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:43.027359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:41.854 [2024-12-13 03:48:43.039839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:43.040294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:43.040318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:43.040330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:43.040529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:43.040727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:43.040740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:43.040749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:43.040758] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:41.854 [2024-12-13 03:48:43.053416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:41.854 [2024-12-13 03:48:43.053880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:41.854 [2024-12-13 03:48:43.053904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:41.854 [2024-12-13 03:48:43.053926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:41.854 [2024-12-13 03:48:43.054154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:41.854 [2024-12-13 03:48:43.054376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:41.854 [2024-12-13 03:48:43.054390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:41.854 [2024-12-13 03:48:43.054401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:41.854 [2024-12-13 03:48:43.054413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.115 [2024-12-13 03:48:43.067053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.115 [2024-12-13 03:48:43.067478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-12-13 03:48:43.067501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.115 [2024-12-13 03:48:43.067511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.115 [2024-12-13 03:48:43.067707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.115 [2024-12-13 03:48:43.067905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.115 [2024-12-13 03:48:43.067925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.115 [2024-12-13 03:48:43.067935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.115 [2024-12-13 03:48:43.067945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.115 [2024-12-13 03:48:43.080426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.115 [2024-12-13 03:48:43.080805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-12-13 03:48:43.080828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.115 [2024-12-13 03:48:43.080839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.115 [2024-12-13 03:48:43.081042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.115 [2024-12-13 03:48:43.081239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.115 [2024-12-13 03:48:43.081252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.115 [2024-12-13 03:48:43.081262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.115 [2024-12-13 03:48:43.081272] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.115 [2024-12-13 03:48:43.093866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.115 [2024-12-13 03:48:43.094301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-12-13 03:48:43.094323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.115 [2024-12-13 03:48:43.094333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.115 [2024-12-13 03:48:43.094529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.115 [2024-12-13 03:48:43.094723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.115 [2024-12-13 03:48:43.094734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.115 [2024-12-13 03:48:43.094743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.115 [2024-12-13 03:48:43.094752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.115 [2024-12-13 03:48:43.107223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.115 [2024-12-13 03:48:43.107550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-12-13 03:48:43.107572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.115 [2024-12-13 03:48:43.107582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.115 [2024-12-13 03:48:43.107779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.115 [2024-12-13 03:48:43.107981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.115 [2024-12-13 03:48:43.107993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.115 [2024-12-13 03:48:43.108003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.115 [2024-12-13 03:48:43.108013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.115 [2024-12-13 03:48:43.120554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.115 [2024-12-13 03:48:43.120945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.115 [2024-12-13 03:48:43.120968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.115 [2024-12-13 03:48:43.120979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.115 [2024-12-13 03:48:43.121176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.115 [2024-12-13 03:48:43.121374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.115 [2024-12-13 03:48:43.121387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.115 [2024-12-13 03:48:43.121396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.115 [2024-12-13 03:48:43.121406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.115 [2024-12-13 03:48:43.125226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:42.115 [2024-12-13 03:48:43.133931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.134240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.134262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.134273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.134469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.134666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.134681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.134690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.134700] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.116 [2024-12-13 03:48:43.147168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.147572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.147595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.147605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.147798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.147994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.148006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.148016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.148025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.116 [2024-12-13 03:48:43.160540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.160981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.161003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.161014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.161204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.161396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.161407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.161432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.161444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.116 [2024-12-13 03:48:43.173788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.174184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.174206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.174217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.174408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.174600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.174611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.174624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.174633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.116 [2024-12-13 03:48:43.187164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.187555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.187577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.187587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.187779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.187981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.187994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.188004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.188013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.116 [2024-12-13 03:48:43.200481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.200968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.200991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.201002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.201205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.201396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.201407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.201416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.201425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.116 [2024-12-13 03:48:43.213845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.214186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.214209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.214219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.214416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.214618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.214632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.214642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.214651] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.116 [2024-12-13 03:48:43.227281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.227708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.227732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.227744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.227950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.228149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.228160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.228169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.228179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.116 [2024-12-13 03:48:43.235759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:42.116 [2024-12-13 03:48:43.235788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:42.116 [2024-12-13 03:48:43.235799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:42.116 [2024-12-13 03:48:43.235824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:42.116 [2024-12-13 03:48:43.235833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:42.116 [2024-12-13 03:48:43.238085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:42.116 [2024-12-13 03:48:43.238154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.116 [2024-12-13 03:48:43.238163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:42.116 [2024-12-13 03:48:43.240644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.241082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.241107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.241118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.241319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.116 [2024-12-13 03:48:43.241518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.116 [2024-12-13 03:48:43.241530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.116 [2024-12-13 03:48:43.241540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.116 [2024-12-13 03:48:43.241549] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.116 [2024-12-13 03:48:43.254130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.116 [2024-12-13 03:48:43.254590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.116 [2024-12-13 03:48:43.254613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.116 [2024-12-13 03:48:43.254624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.116 [2024-12-13 03:48:43.254823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.117 [2024-12-13 03:48:43.255033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.117 [2024-12-13 03:48:43.255046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.117 [2024-12-13 03:48:43.255056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.117 [2024-12-13 03:48:43.255066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.117 [2024-12-13 03:48:43.267565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.117 [2024-12-13 03:48:43.267948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.117 [2024-12-13 03:48:43.267973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.117 [2024-12-13 03:48:43.267984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.117 [2024-12-13 03:48:43.268181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.117 [2024-12-13 03:48:43.268378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.117 [2024-12-13 03:48:43.268391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.117 [2024-12-13 03:48:43.268401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.117 [2024-12-13 03:48:43.268410] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.117 [2024-12-13 03:48:43.280945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.117 [2024-12-13 03:48:43.281271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.117 [2024-12-13 03:48:43.281295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.117 [2024-12-13 03:48:43.281306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.117 [2024-12-13 03:48:43.281503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.117 [2024-12-13 03:48:43.281701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.117 [2024-12-13 03:48:43.281713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.117 [2024-12-13 03:48:43.281722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.117 [2024-12-13 03:48:43.281732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.117 [2024-12-13 03:48:43.294393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.117 [2024-12-13 03:48:43.294822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.117 [2024-12-13 03:48:43.294843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.117 [2024-12-13 03:48:43.294854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.117 [2024-12-13 03:48:43.295058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.117 [2024-12-13 03:48:43.295255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.117 [2024-12-13 03:48:43.295266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.117 [2024-12-13 03:48:43.295279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.117 [2024-12-13 03:48:43.295288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.117 [2024-12-13 03:48:43.307749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.117 [2024-12-13 03:48:43.308133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.117 [2024-12-13 03:48:43.308156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.117 [2024-12-13 03:48:43.308166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.117 [2024-12-13 03:48:43.308365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.117 [2024-12-13 03:48:43.308562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.117 [2024-12-13 03:48:43.308574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.117 [2024-12-13 03:48:43.308584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.117 [2024-12-13 03:48:43.308593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.117 [2024-12-13 03:48:43.321108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.117 [2024-12-13 03:48:43.321525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.117 [2024-12-13 03:48:43.321551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.117 [2024-12-13 03:48:43.321563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.117 [2024-12-13 03:48:43.321762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.117 [2024-12-13 03:48:43.321970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.117 [2024-12-13 03:48:43.321982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.117 [2024-12-13 03:48:43.321993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.117 [2024-12-13 03:48:43.322003] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.377 [2024-12-13 03:48:43.334560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.377 [2024-12-13 03:48:43.334921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.377 [2024-12-13 03:48:43.334946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.334957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.335156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.335355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.335367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.335377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.335388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.378 [2024-12-13 03:48:43.347923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.348309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.348332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.348342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.348539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.348737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.348749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.348759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.348769] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.378 [2024-12-13 03:48:43.361263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.361581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.361602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.361612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.361809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.362015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.362027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.362037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.362046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.378 [2024-12-13 03:48:43.374711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.375168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.375190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.375201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.375397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.375595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.375607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.375616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.375625] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.378 [2024-12-13 03:48:43.388081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.388406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.388431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.388442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.388638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.388834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.388845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.388854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.388863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.378 [2024-12-13 03:48:43.401513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.401831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.401853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.401864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.402065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.402261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.402272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.402282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.402291] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.378 [2024-12-13 03:48:43.414908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.415334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.415355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.415366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.415562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.415758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.415769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.415779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.415788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.378 [2024-12-13 03:48:43.428216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.428590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.428611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.428622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.428823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.429027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.429039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.429049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.429058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.378 [2024-12-13 03:48:43.441649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.442071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.442095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.442111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.442307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.442504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.442516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.442525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.442535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.378 [2024-12-13 03:48:43.454974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.455422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.455446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.378 [2024-12-13 03:48:43.455456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.378 [2024-12-13 03:48:43.455653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.378 [2024-12-13 03:48:43.455850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.378 [2024-12-13 03:48:43.455861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.378 [2024-12-13 03:48:43.455871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.378 [2024-12-13 03:48:43.455880] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.378 [2024-12-13 03:48:43.468367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.378 [2024-12-13 03:48:43.468837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.378 [2024-12-13 03:48:43.468861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.468872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.469079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.469280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.469298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.469308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.469318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.379 [2024-12-13 03:48:43.481810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.482261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.482283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.482295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.482493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.482693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.482704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.482714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.482724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.379 [2024-12-13 03:48:43.495190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.495634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.495657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.495668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.495867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.496073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.496086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.496096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.496107] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.379 [2024-12-13 03:48:43.508582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.508944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.508966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.508976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.509172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.509369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.509380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.509392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.509402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.379 [2024-12-13 03:48:43.521869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.522306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.522328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.522338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.522533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.522730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.522742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.522751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.522760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.379 [2024-12-13 03:48:43.535190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.535611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.535633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.535644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.535839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.536040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.536052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.536061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.536071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.379 [2024-12-13 03:48:43.548512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.548858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.548914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.548932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.549129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.549325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.549337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.549346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.549356] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.379 [2024-12-13 03:48:43.561942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.562387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.562409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.562421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.562616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.562812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.562823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.562832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.562841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.379 [2024-12-13 03:48:43.575278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.379 [2024-12-13 03:48:43.575719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.379 [2024-12-13 03:48:43.575741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.379 [2024-12-13 03:48:43.575752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.379 [2024-12-13 03:48:43.575954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.379 [2024-12-13 03:48:43.576150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.379 [2024-12-13 03:48:43.576161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.379 [2024-12-13 03:48:43.576171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.379 [2024-12-13 03:48:43.576180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.639 [2024-12-13 03:48:43.588608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.639 [2024-12-13 03:48:43.589057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.639 [2024-12-13 03:48:43.589082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.639 [2024-12-13 03:48:43.589093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.639 [2024-12-13 03:48:43.589290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.639 [2024-12-13 03:48:43.589487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.639 [2024-12-13 03:48:43.589498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.639 [2024-12-13 03:48:43.589508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.639 [2024-12-13 03:48:43.589517] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.639 [2024-12-13 03:48:43.601951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.639 [2024-12-13 03:48:43.602367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.639 [2024-12-13 03:48:43.602389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.639 [2024-12-13 03:48:43.602402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.639 [2024-12-13 03:48:43.602599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.639 [2024-12-13 03:48:43.602796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.639 [2024-12-13 03:48:43.602809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.639 [2024-12-13 03:48:43.602818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.639 [2024-12-13 03:48:43.602827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.640 [2024-12-13 03:48:43.615278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.615716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.615737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.615748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.615948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.616144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.616156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.616166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.616175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.640 [2024-12-13 03:48:43.628584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.628974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.628998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.629009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.629206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.629402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.629415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.629424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.629433] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.640 [2024-12-13 03:48:43.642025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.642470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.642492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.642503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.642701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.642897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.642908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.642922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.642932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.640 3672.17 IOPS, 14.34 MiB/s [2024-12-13T02:48:43.849Z] [2024-12-13 03:48:43.656714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.657079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.657102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.657112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.657306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.657501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.657513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.657522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.657531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.640 [2024-12-13 03:48:43.670125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.670551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.670573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.670584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.670778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.670979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.670992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.671001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.671010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.640 [2024-12-13 03:48:43.683415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.683861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.683882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.683893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.684092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.684290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.684304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.684314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.684323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.640 [2024-12-13 03:48:43.696736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.697198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.697220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.697231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.697426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.697620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.697632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.697641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.697650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.640 [2024-12-13 03:48:43.710054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.710510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.710531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.710542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.710737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.710937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.710949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.710959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.710969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.640 [2024-12-13 03:48:43.723368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.723820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.723843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.723854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.724054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.724251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.724263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.724273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.724286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.640 [2024-12-13 03:48:43.736715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.640 [2024-12-13 03:48:43.737109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.640 [2024-12-13 03:48:43.737132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.640 [2024-12-13 03:48:43.737143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.640 [2024-12-13 03:48:43.737345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.640 [2024-12-13 03:48:43.737541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.640 [2024-12-13 03:48:43.737554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.640 [2024-12-13 03:48:43.737564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.640 [2024-12-13 03:48:43.737573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.640 [2024-12-13 03:48:43.750112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.750480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.750503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.750514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.750709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.750944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.750957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.750966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.750975] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.641 [2024-12-13 03:48:43.763551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.763992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.764014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.764024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.764219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.764415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.764427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.764437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.764445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.641 [2024-12-13 03:48:43.776852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.777311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.777333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.777343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.777538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.777733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.777745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.777754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.777763] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.641 [2024-12-13 03:48:43.790169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.790588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.790610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.790620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.790814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.791015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.791027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.791036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.791045] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:42.641 [2024-12-13 03:48:43.803515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.803896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.803923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.803934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.804129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.804324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.804336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.804346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.804358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.641 [2024-12-13 03:48:43.816961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.817292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.817315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.817326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.817522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.817718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.817729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.817738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.817748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.641 [2024-12-13 03:48:43.830343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.830666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.830688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.830698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.830893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.831096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.831109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.831121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.831131] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.641 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:42.641 [2024-12-13 03:48:43.842476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:42.641 [2024-12-13 03:48:43.843723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.641 [2024-12-13 03:48:43.844093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.641 [2024-12-13 03:48:43.844115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.641 [2024-12-13 03:48:43.844126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.641 [2024-12-13 03:48:43.844320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.641 [2024-12-13 03:48:43.844517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.641 [2024-12-13 03:48:43.844528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.641 [2024-12-13 03:48:43.844540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.641 [2024-12-13 03:48:43.844550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.901 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.901 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:42.901 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.901 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:42.901 [2024-12-13 03:48:43.857168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.901 [2024-12-13 03:48:43.857481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.901 [2024-12-13 03:48:43.857502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.901 [2024-12-13 03:48:43.857513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.901 [2024-12-13 03:48:43.857708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.901 [2024-12-13 03:48:43.857903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.901 [2024-12-13 03:48:43.857915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.901 [2024-12-13 03:48:43.857931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.901 [2024-12-13 03:48:43.857940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.901 [2024-12-13 03:48:43.870570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.901 [2024-12-13 03:48:43.871031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.901 [2024-12-13 03:48:43.871055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.901 [2024-12-13 03:48:43.871066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.901 [2024-12-13 03:48:43.871266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.901 [2024-12-13 03:48:43.871463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.901 [2024-12-13 03:48:43.871475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.901 [2024-12-13 03:48:43.871485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.901 [2024-12-13 03:48:43.871494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.901 [2024-12-13 03:48:43.883992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.901 [2024-12-13 03:48:43.884443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.901 [2024-12-13 03:48:43.884465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.901 [2024-12-13 03:48:43.884476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.901 [2024-12-13 03:48:43.884675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.902 [2024-12-13 03:48:43.884874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.902 [2024-12-13 03:48:43.884889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.902 [2024-12-13 03:48:43.884899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.902 [2024-12-13 03:48:43.884909] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.902 [2024-12-13 03:48:43.897388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.902 [2024-12-13 03:48:43.897828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-12-13 03:48:43.897850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.902 [2024-12-13 03:48:43.897861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.902 [2024-12-13 03:48:43.898063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.902 [2024-12-13 03:48:43.898260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.902 [2024-12-13 03:48:43.898272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.902 [2024-12-13 03:48:43.898281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.902 [2024-12-13 03:48:43.898290] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.902 [2024-12-13 03:48:43.910740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.902 [2024-12-13 03:48:43.911164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-12-13 03:48:43.911186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.902 [2024-12-13 03:48:43.911197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.902 [2024-12-13 03:48:43.911394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.902 [2024-12-13 03:48:43.911590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.902 [2024-12-13 03:48:43.911602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.902 [2024-12-13 03:48:43.911612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.902 [2024-12-13 03:48:43.911621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.902 [2024-12-13 03:48:43.924063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.902 [2024-12-13 03:48:43.924531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-12-13 03:48:43.924555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.902 [2024-12-13 03:48:43.924566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.902 [2024-12-13 03:48:43.924762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.902 [2024-12-13 03:48:43.924965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.902 [2024-12-13 03:48:43.924986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.902 [2024-12-13 03:48:43.924998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.902 [2024-12-13 03:48:43.925008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.902 Malloc0 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:42.902 [2024-12-13 03:48:43.937432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.902 [2024-12-13 03:48:43.937874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-12-13 03:48:43.937895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.902 [2024-12-13 03:48:43.937906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.902 [2024-12-13 03:48:43.938110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.902 [2024-12-13 03:48:43.938311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.902 [2024-12-13 03:48:43.938323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.902 [2024-12-13 03:48:43.938332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.902 [2024-12-13 03:48:43.938342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:42.902 [2024-12-13 03:48:43.950784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.902 [2024-12-13 03:48:43.951236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.902 [2024-12-13 03:48:43.951258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:42.902 [2024-12-13 03:48:43.951269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:42.902 [2024-12-13 03:48:43.951465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:42.902 [2024-12-13 03:48:43.951660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:42.902 [2024-12-13 03:48:43.951671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:42.902 [2024-12-13 03:48:43.951681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:42.902 [2024-12-13 03:48:43.951690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:42.902 [2024-12-13 03:48:43.955817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.902 03:48:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2911769 00:37:42.902 [2024-12-13 03:48:43.964095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:42.902 [2024-12-13 03:48:43.993425] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
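The target-side configuration driven by the rpc_cmd calls in the trace above can be replayed by hand. A minimal sketch, assuming rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py and that an nvmf_tgt application is already running on its default RPC socket (the rpc.py path below is an assumption, not taken from this log):

#!/usr/bin/env bash
# Hypothetical standalone replay of the target setup traced above.
rpc=./scripts/rpc.py   # assumed location inside an SPDK checkout
$rpc nvmf_create_transport -t tcp -o -u 8192                                        # TCP transport, 8192-byte IO unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                                           # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001      # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                       # expose Malloc0 as a namespace of cnode1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420

Until the listener is added, the host side keeps failing its reconnect attempts with errno 111 (connection refused), which is the repeated reset loop recorded above; once the listener is up, the controller reset completes, as logged at the end of this setup sequence.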
00:37:44.851 4051.00 IOPS, 15.82 MiB/s [2024-12-13T02:48:46.995Z] 4774.50 IOPS, 18.65 MiB/s [2024-12-13T02:48:47.931Z] 5346.78 IOPS, 20.89 MiB/s [2024-12-13T02:48:48.868Z] 5779.90 IOPS, 22.58 MiB/s [2024-12-13T02:48:49.804Z] 6139.45 IOPS, 23.98 MiB/s [2024-12-13T02:48:50.741Z] 6434.75 IOPS, 25.14 MiB/s [2024-12-13T02:48:52.118Z] 6682.77 IOPS, 26.10 MiB/s [2024-12-13T02:48:52.686Z] 6897.57 IOPS, 26.94 MiB/s [2024-12-13T02:48:52.686Z] 7085.20 IOPS, 27.68 MiB/s 00:37:51.477 Latency(us) 00:37:51.477 [2024-12-13T02:48:52.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.477 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:51.477 Verification LBA range: start 0x0 length 0x4000 00:37:51.477 Nvme1n1 : 15.02 7086.49 27.68 11758.07 0.00 6770.81 514.93 28960.67 00:37:51.477 [2024-12-13T02:48:52.686Z] =================================================================================================================== 00:37:51.477 [2024-12-13T02:48:52.686Z] Total : 7086.49 27.68 11758.07 0.00 6770.81 514.93 28960.67 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:52.416 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:52.416 rmmod nvme_tcp 00:37:52.676 rmmod nvme_fabrics 00:37:52.676 rmmod nvme_keyring 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2912670 ']' 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2912670 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2912670 ']' 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2912670 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2912670 
00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2912670' 00:37:52.676 killing process with pid 2912670 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2912670 00:37:52.676 03:48:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2912670 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.056 03:48:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.006 00:37:56.006 real 0m29.771s 00:37:56.006 user 1m14.079s 00:37:56.006 sys 0m6.724s 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:56.006 ************************************ 00:37:56.006 END TEST nvmf_bdevperf 00:37:56.006 ************************************ 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:56.006 ************************************ 00:37:56.006 START TEST nvmf_target_disconnect 00:37:56.006 ************************************ 00:37:56.006 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:56.286 * Looking for test storage... 
00:37:56.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.286 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:56.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.287 --rc genhtml_branch_coverage=1 00:37:56.287 --rc genhtml_function_coverage=1 00:37:56.287 --rc genhtml_legend=1 00:37:56.287 --rc geninfo_all_blocks=1 00:37:56.287 --rc geninfo_unexecuted_blocks=1 00:37:56.287 00:37:56.287 ' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:56.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.287 --rc genhtml_branch_coverage=1 00:37:56.287 --rc genhtml_function_coverage=1 00:37:56.287 --rc genhtml_legend=1 00:37:56.287 --rc geninfo_all_blocks=1 00:37:56.287 --rc geninfo_unexecuted_blocks=1 00:37:56.287 00:37:56.287 ' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:56.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.287 --rc genhtml_branch_coverage=1 00:37:56.287 --rc genhtml_function_coverage=1 00:37:56.287 --rc genhtml_legend=1 00:37:56.287 --rc geninfo_all_blocks=1 00:37:56.287 --rc geninfo_unexecuted_blocks=1 00:37:56.287 00:37:56.287 ' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:56.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.287 --rc genhtml_branch_coverage=1 00:37:56.287 --rc genhtml_function_coverage=1 00:37:56.287 --rc genhtml_legend=1 00:37:56.287 --rc geninfo_all_blocks=1 00:37:56.287 --rc geninfo_unexecuted_blocks=1 00:37:56.287 00:37:56.287 ' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.287 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:56.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.288 03:48:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:01.566 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:01.567 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:01.567 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:01.567 Found net devices under 0000:af:00.0: cvl_0_0 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:01.567 Found net devices under 0000:af:00.1: cvl_0_1 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:01.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:01.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:38:01.567 00:38:01.567 --- 10.0.0.2 ping statistics --- 00:38:01.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.567 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:01.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:01.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:38:01.567 00:38:01.567 --- 10.0.0.1 ping statistics --- 00:38:01.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.567 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:01.567 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:01.827 ************************************ 00:38:01.827 START TEST nvmf_target_disconnect_tc1 00:38:01.827 ************************************ 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:01.827 03:49:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:01.827 [2024-12-13 03:49:02.968700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.827 [2024-12-13 03:49:02.968769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325800 with addr=10.0.0.2, port=4420 00:38:01.827 [2024-12-13 03:49:02.968842] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:01.827 [2024-12-13 03:49:02.968859] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:01.827 [2024-12-13 03:49:02.968871] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:01.827 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:01.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:01.827 Initializing NVMe Controllers 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:01.827 00:38:01.827 real 0m0.177s 00:38:01.827 user 0m0.076s 00:38:01.827 sys 0m0.100s 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:01.827 03:49:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:01.827 ************************************ 00:38:01.827 END TEST nvmf_target_disconnect_tc1 00:38:01.827 ************************************ 00:38:01.827 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:01.827 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:01.827 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:38:01.827 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:02.086 ************************************ 00:38:02.086 START TEST nvmf_target_disconnect_tc2 00:38:02.086 ************************************ 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2918031 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2918031 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2918031 ']' 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:02.086 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.086 [2024-12-13 03:49:03.143719] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:02.086 [2024-12-13 03:49:03.143822] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:02.086 [2024-12-13 03:49:03.271922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:02.345 [2024-12-13 03:49:03.387424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:02.345 [2024-12-13 03:49:03.387464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:02.345 [2024-12-13 03:49:03.387475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.345 [2024-12-13 03:49:03.387485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.345 [2024-12-13 03:49:03.387493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:02.345 [2024-12-13 03:49:03.390000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:02.345 [2024-12-13 03:49:03.390090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:02.345 [2024-12-13 03:49:03.390110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:02.345 [2024-12-13 03:49:03.390093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.912 03:49:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.912 Malloc0 00:38:02.912 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.912 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:02.912 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.912 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.912 [2024-12-13 03:49:04.077033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.912 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.912 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.913 03:49:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.913 [2024-12-13 03:49:04.105336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2918216 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:02.913 03:49:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:05.472 03:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2918031 00:38:05.472 03:49:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with 
error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 [2024-12-13 03:49:06.141890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write 
completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 [2024-12-13 03:49:06.142269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Read completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O failed 00:38:05.472 Write completed with error (sct=0, sc=8) 00:38:05.472 starting I/O 
failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 [2024-12-13 03:49:06.142620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 
00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Write completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 Read completed with error (sct=0, sc=8) 00:38:05.473 starting I/O failed 00:38:05.473 [2024-12-13 03:49:06.142979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:05.473 [2024-12-13 03:49:06.143215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.143240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.143385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.143408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.143574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.143590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.143759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.143773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.143888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.143901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 
00:38:05.473 [2024-12-13 03:49:06.144375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.144901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.144914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 
00:38:05.473 [2024-12-13 03:49:06.145431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.473 [2024-12-13 03:49:06.145723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.473 qpair failed and we were unable to recover it. 00:38:05.473 [2024-12-13 03:49:06.145820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.145835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.145921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.145940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 
00:38:05.474 [2024-12-13 03:49:06.146590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.146911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.146994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.147152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.147246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.147340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.147503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.147669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 
00:38:05.474 [2024-12-13 03:49:06.147775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.147972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.147986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.148886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.148900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 
00:38:05.474 [2024-12-13 03:49:06.149059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.149898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.149913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.150073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.150086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 
00:38:05.474 [2024-12-13 03:49:06.150156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.150170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.474 [2024-12-13 03:49:06.150321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.474 [2024-12-13 03:49:06.150334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.474 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.150405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.150418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.150553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.150567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.150651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.150667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.150742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.150755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.150826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.150840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.150943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.150957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.151054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.151149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 
00:38:05.475 [2024-12-13 03:49:06.151242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.151357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.151452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.151645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.151790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.151818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 
00:38:05.475 [2024-12-13 03:49:06.152654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.152927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.152945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.153789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 
00:38:05.475 [2024-12-13 03:49:06.153895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.153912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.154859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.154877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.475 [2024-12-13 03:49:06.155100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.475 [2024-12-13 03:49:06.155118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.475 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.155263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 
00:38:05.476 [2024-12-13 03:49:06.155352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.155460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.155547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.155731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.155826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.155928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.155943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 
00:38:05.476 [2024-12-13 03:49:06.156608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.156912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.156994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 
00:38:05.476 [2024-12-13 03:49:06.157698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.157958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.157972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 
00:38:05.476 [2024-12-13 03:49:06.158733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.476 [2024-12-13 03:49:06.158866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.476 [2024-12-13 03:49:06.158881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.476 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.158947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.158960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 
00:38:05.477 [2024-12-13 03:49:06.159731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.159887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.159902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 00:38:05.477 [2024-12-13 03:49:06.160963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.477 [2024-12-13 03:49:06.160981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.477 qpair failed and we were unable to recover it. 
00:38:05.482 [2024-12-13 03:49:06.161146 - 03:49:06.185627] (condensed) posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 repeat identically for every remaining connection attempt in this window; each time the qpair failed and we were unable to recover it.
00:38:05.482 [2024-12-13 03:49:06.185784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.185825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.186091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.186133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.186314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.186327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.186535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.186576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.186773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.186818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.187031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.187087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.187275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.187317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.187576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.187617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.187779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.187822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.188104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.188148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 
00:38:05.483 [2024-12-13 03:49:06.188270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.188312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.188550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.188565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.188744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.188758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.188948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.188991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.189200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.189241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.189461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.189501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.189680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.189720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.189912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.190025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.190230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.190244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.190428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.190470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 
00:38:05.483 [2024-12-13 03:49:06.190665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.190707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.190863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.190914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.191162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.191204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.191398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.191441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.191603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.191626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.191851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.191870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.192075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.192118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.192253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.192294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.192512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.192554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.192776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.192821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 
00:38:05.483 [2024-12-13 03:49:06.193052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.193092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.193351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.193392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.193535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.193556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.193763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.193793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.483 qpair failed and we were unable to recover it. 00:38:05.483 [2024-12-13 03:49:06.193900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.483 [2024-12-13 03:49:06.193928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.194081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.194101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.194290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.194311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.194395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.194415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.194584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.194604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.194710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.194730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 
00:38:05.484 [2024-12-13 03:49:06.194853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.194875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.195942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.195956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 
00:38:05.484 [2024-12-13 03:49:06.196170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.196937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.196951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.197024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.197188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.197419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 
00:38:05.484 [2024-12-13 03:49:06.197526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.197680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.197801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.197899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.197912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.198065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.198079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.198178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.198193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.198277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.198290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.198499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.198513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.198667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.484 [2024-12-13 03:49:06.198682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.484 qpair failed and we were unable to recover it. 00:38:05.484 [2024-12-13 03:49:06.198824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.198838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 
00:38:05.485 [2024-12-13 03:49:06.199035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.199182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.199271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.199433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.199575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.199665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.199932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.199947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.200087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.200241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.200352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 
00:38:05.485 [2024-12-13 03:49:06.200456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.200619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.200714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.200929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.200943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 
00:38:05.485 [2024-12-13 03:49:06.201889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.201903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.201993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.202852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.202868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.203028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.203042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 
00:38:05.485 [2024-12-13 03:49:06.203185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.203198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.203269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.203287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.203365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.203378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.485 qpair failed and we were unable to recover it. 00:38:05.485 [2024-12-13 03:49:06.203458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.485 [2024-12-13 03:49:06.203471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.203556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.203570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.203638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.203650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.203717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.203729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.203859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.203872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.203948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.203960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 
00:38:05.486 [2024-12-13 03:49:06.204179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.204981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.204995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 
00:38:05.486 [2024-12-13 03:49:06.205392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.205967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.205983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.206083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.206097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.206317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.206332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.206412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.206425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.206565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.206579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.206714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.206729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 
00:38:05.486 [2024-12-13 03:49:06.206883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.206896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.206989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.207003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.207156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.207170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.207251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.207263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.486 [2024-12-13 03:49:06.207403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.486 [2024-12-13 03:49:06.207417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.486 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.207487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.207500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.207573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.207585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.207720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.207734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.207795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.207807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.207942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.207974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 
00:38:05.487 [2024-12-13 03:49:06.208123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.208137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.208302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.208325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.208411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.208431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.208595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.208616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.208851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.208866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.209043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.209152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.209266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.209486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.209579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 
00:38:05.487 [2024-12-13 03:49:06.209754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.209846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.209860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.210791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.210996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 
00:38:05.487 [2024-12-13 03:49:06.211094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.211923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.211997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.212009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.212080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.487 [2024-12-13 03:49:06.212094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.487 qpair failed and we were unable to recover it. 00:38:05.487 [2024-12-13 03:49:06.212244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.212258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 
00:38:05.488 [2024-12-13 03:49:06.212342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.212357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.212434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.212448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.212595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.212610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.212810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.212823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.212955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.212969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.213112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.213126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.213286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.213300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.213444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.213460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.213604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.213617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.213766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.213781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 
00:38:05.488 [2024-12-13 03:49:06.213935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.213950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.214051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.214084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.214248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.214270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.214418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.214440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.214621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.214642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.214734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.214754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.214933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.214956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.215048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.215064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.215215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.215229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 00:38:05.488 [2024-12-13 03:49:06.215320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.488 [2024-12-13 03:49:06.215334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.488 qpair failed and we were unable to recover it. 
00:38:05.493 [2024-12-13 03:49:06.236482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.236496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.236636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.236653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.236806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.236820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.236970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.236983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.237127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.237141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.237279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.237293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.237518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.237533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.237703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.237716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.237882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.237933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.238184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.238227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 
00:38:05.493 [2024-12-13 03:49:06.238485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.238498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.238598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.238612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.238812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.238827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.238968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.238982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.239229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.239244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.239393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.239406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.239573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.239616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.239822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.239867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.240072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.240129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.240387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.240429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 
00:38:05.493 [2024-12-13 03:49:06.240617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.240659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.240820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.240834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.241005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.241020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.241224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.241238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.241304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.241317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.241533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.241546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.241773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.493 [2024-12-13 03:49:06.241821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.493 qpair failed and we were unable to recover it. 00:38:05.493 [2024-12-13 03:49:06.241977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.242020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.242228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.242270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.242559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.242573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 
00:38:05.494 [2024-12-13 03:49:06.242795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.242809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.242904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.242921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.243925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.243939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 
00:38:05.494 [2024-12-13 03:49:06.244048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.244064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.244218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.244234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.244327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.244340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.244542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.244556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.244690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.244702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.244862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.244875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.245020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.245035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.245262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.245276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.245357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.245369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.245507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.245520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 
00:38:05.494 [2024-12-13 03:49:06.245658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.245672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.245816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.245829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.246048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.246062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.246213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.246227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.246385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.246399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.246544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.246557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.246689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.246702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.246908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.246925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.247087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.247100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.247263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.247303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 
00:38:05.494 [2024-12-13 03:49:06.247445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.247485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.494 qpair failed and we were unable to recover it. 00:38:05.494 [2024-12-13 03:49:06.247683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.494 [2024-12-13 03:49:06.247724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.247861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.247874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.248018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.248032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.248237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.248251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.248428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.248442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.248575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.248589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.248740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.248752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.248860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.248874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.249022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.249035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 
00:38:05.495 [2024-12-13 03:49:06.249239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.249253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.249407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.249421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.249658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.249672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.249833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.249846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.249985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.249998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.250147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.250161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.250295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.250307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.250515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.250529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.250616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.250629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.250858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.250871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 
00:38:05.495 [2024-12-13 03:49:06.250958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.250978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.251937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.251950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.252099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.252112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 
00:38:05.495 [2024-12-13 03:49:06.252263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.252276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.252421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.252434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.252513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.252526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.252595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.252607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.495 qpair failed and we were unable to recover it. 00:38:05.495 [2024-12-13 03:49:06.252685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.495 [2024-12-13 03:49:06.252697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.252833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.252847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.253049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.253153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.253314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.253524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 
00:38:05.496 [2024-12-13 03:49:06.253681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.253787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.253893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.253906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.254137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.254151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.254314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.254327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.254416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.254428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.254585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.254598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.254731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.254744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.254925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.254940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.255027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.255044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 
00:38:05.496 [2024-12-13 03:49:06.255266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.255279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.255432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.255445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.255659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.255672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.255832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.255846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.255931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.255943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.256086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.256100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.256349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.256390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.256593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.256635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.256832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.256875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.257058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.257101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 
00:38:05.496 [2024-12-13 03:49:06.257379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.257422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.257619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.257667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.257876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.496 [2024-12-13 03:49:06.257928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.496 qpair failed and we were unable to recover it. 00:38:05.496 [2024-12-13 03:49:06.258056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.258098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.258408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.258450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.258751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.258764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.258988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.259087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.259190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.259356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 
00:38:05.497 [2024-12-13 03:49:06.259585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.259732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.259922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.259935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.260083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.260097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.260239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.260253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.260469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.260511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.260731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.260772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.260910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.260965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.261108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.261148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.261360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.261403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 
00:38:05.497 [2024-12-13 03:49:06.261600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.261648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.261737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.261750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.261968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.261982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.262882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.262895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 
00:38:05.497 [2024-12-13 03:49:06.262989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.497 [2024-12-13 03:49:06.263855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.497 [2024-12-13 03:49:06.263897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.497 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.264059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.264102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.264304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.264344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 
00:38:05.498 [2024-12-13 03:49:06.264478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.264518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.264775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.264821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.264958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.265001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.265193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.265234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.265384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.265397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.265467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.265479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.265610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.265623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.265851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.265891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.266064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.266104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.266254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.266296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 
00:38:05.498 [2024-12-13 03:49:06.266444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.266485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.266607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.266648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.266857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.266897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.267114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.267156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.267301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.267341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.267549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.267590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.267872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.267914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.268051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.268094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.268304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.268346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.268558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.268599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 
00:38:05.498 [2024-12-13 03:49:06.268724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.268736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.268818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.268832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.268926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.268941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.269937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.269982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 
00:38:05.498 [2024-12-13 03:49:06.270119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.498 [2024-12-13 03:49:06.270165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.498 qpair failed and we were unable to recover it. 00:38:05.498 [2024-12-13 03:49:06.270258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.270275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.270413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.270426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.270627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.270640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.270707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.270719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.270885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.270898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.271088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.271132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.271339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.271381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.271581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.271622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.271867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.271880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 
00:38:05.499 [2024-12-13 03:49:06.272041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.272267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.272435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.272527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.272695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.272846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.272956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.272969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.273114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.273150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.273303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.273346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.273486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.273526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 
00:38:05.499 [2024-12-13 03:49:06.273669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.273709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.273858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.273872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.273962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.273975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.274110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.274128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.274285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.274298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.274519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.274561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.274771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.274813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.275040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.275082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.275226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.275267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.275468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.275509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 
00:38:05.499 [2024-12-13 03:49:06.275716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.275757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.275895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.275945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.276154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.276195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.276337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.499 [2024-12-13 03:49:06.276390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.499 qpair failed and we were unable to recover it. 00:38:05.499 [2024-12-13 03:49:06.276595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.276608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.276759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.276772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.276837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.276849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.277065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.277294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.277444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 
00:38:05.500 [2024-12-13 03:49:06.277551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.277649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.277799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.277953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.277967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.278181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.278222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.278437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.278479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.278616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.278659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.278872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.278885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.278969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.278982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.279126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.279139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 
00:38:05.500 [2024-12-13 03:49:06.279350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.279391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.279531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.279578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.279847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.279888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.280050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.280093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.280404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.280445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.280646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.280687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.280942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.280983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.281129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.281170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.281437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.281479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.281630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.281671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 
00:38:05.500 [2024-12-13 03:49:06.281884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.281948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.282211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.282251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.282460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.282503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.282703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.282717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.282893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.282948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.283166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.283208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.283344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.283386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.283574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.283587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.500 [2024-12-13 03:49:06.283741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.500 [2024-12-13 03:49:06.283754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.500 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.283893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.283907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 
00:38:05.501 [2024-12-13 03:49:06.284132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.284213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.284362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.284531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.284642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.284792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.284958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.284972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.285119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.285133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.285204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.285217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.285371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.285385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 
00:38:05.501 [2024-12-13 03:49:06.285529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.285566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.285717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.285759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.286025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.286069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.286340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.286382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.286655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.286668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.286803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.286816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.287035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.287050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.287263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.287292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.287443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.287486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.287635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.287675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 
00:38:05.501 [2024-12-13 03:49:06.287939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.287982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.288174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.288223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.288487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.288528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.288743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.288785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.288936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.288966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.289183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.289224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.289503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.289545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.501 qpair failed and we were unable to recover it. 00:38:05.501 [2024-12-13 03:49:06.289747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.501 [2024-12-13 03:49:06.289788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.289916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.289993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.290197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.290239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 
00:38:05.502 [2024-12-13 03:49:06.290506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.290547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.290682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.290724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.290931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.290944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.291083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.291096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.291331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.291372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.291636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.291677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.291927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.291972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.292120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.292161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.292302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.292342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.292529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.292542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 
00:38:05.502 [2024-12-13 03:49:06.292720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.292733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.292897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.292947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.293080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.293120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.293350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.293393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.293552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.293601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.293668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.293680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.293881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.293895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.294050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.294064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.294231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.294244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.294399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.294454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 
00:38:05.502 [2024-12-13 03:49:06.294606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.294646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.294787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.294828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.294971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.295013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.295207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.295249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.295505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.295545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.295738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.295780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.295930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.295944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.296038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.296050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.296253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.296267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.296401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.296422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 
00:38:05.502 [2024-12-13 03:49:06.296564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.296577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.296655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.296670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.502 qpair failed and we were unable to recover it. 00:38:05.502 [2024-12-13 03:49:06.296908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.502 [2024-12-13 03:49:06.296925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.297131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.297145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.297311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.297325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.297458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.297471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.297552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.297566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.297712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.297724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.297895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.297942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.298141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.298181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 
00:38:05.503 [2024-12-13 03:49:06.298484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.298535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.298633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.298646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.298789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.298803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.298892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.298904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.299179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.299263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.299660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.299740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.299934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.299979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.300161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.300176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.300352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.300388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.300593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.300635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 
00:38:05.503 [2024-12-13 03:49:06.300912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.300965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.301266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.301307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.301443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.301485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.301698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.301714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.301923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.301942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.302120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.302160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.302370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.302410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.302642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.302682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.302892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.302957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.303172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.303216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 
00:38:05.503 [2024-12-13 03:49:06.303424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.303469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.303728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.303771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.304033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.304076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.304220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.304262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.304392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.304434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.304632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.304674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.304842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.304862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.305017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.305032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.305279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.305322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.503 qpair failed and we were unable to recover it. 00:38:05.503 [2024-12-13 03:49:06.305535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.503 [2024-12-13 03:49:06.305576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 
00:38:05.504 [2024-12-13 03:49:06.305732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.305774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.305942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.305959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.306110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.306122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.306212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.306224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.306388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.306401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.306550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.306564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.306701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.306714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.306882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.306895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.307044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.307082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.307227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.307268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 
00:38:05.504 [2024-12-13 03:49:06.307466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.307508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.307651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.307664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.307811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.307825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.307995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.308161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.308259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.308374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.308535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.308718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.308953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.308995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 
00:38:05.504 [2024-12-13 03:49:06.309216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.309258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.309545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.309586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.309884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.309978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.310258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.310300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.310435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.310449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.310532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.310544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.310774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.310787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.310926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.310940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.311034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.311197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 
00:38:05.504 [2024-12-13 03:49:06.311354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.311446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.311622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.311772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.311948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.311990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.312221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.504 [2024-12-13 03:49:06.312267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.504 qpair failed and we were unable to recover it. 00:38:05.504 [2024-12-13 03:49:06.312471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.312514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.312746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.312760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.312856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.312868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.313023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.313037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 
00:38:05.505 [2024-12-13 03:49:06.313115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.313128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.313284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.313297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.313460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.313473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.313637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.313679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.313893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.313945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.314256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.314298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.314524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.314566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.314889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.314962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.315124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.315171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.315442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.315485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 
00:38:05.505 [2024-12-13 03:49:06.315664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.315684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.315906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.315958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.316096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.316138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.316352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.316395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.316623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.316644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.316744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.316765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.316846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.316861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.317021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.317035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.317120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.317133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.317282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.317295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 
00:38:05.505 [2024-12-13 03:49:06.317517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.317530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.317672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.317685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.317853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.317866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.318840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 
00:38:05.505 [2024-12-13 03:49:06.318930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.318942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.319088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.319101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.319302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.505 [2024-12-13 03:49:06.319315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.505 qpair failed and we were unable to recover it. 00:38:05.505 [2024-12-13 03:49:06.319474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.319488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.319619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.319632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.319834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.319847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.320018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.320032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.320107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.320120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.320273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.320286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.320496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.320537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 
00:38:05.506 [2024-12-13 03:49:06.320742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.320783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.320941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.320984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.321270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.321313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.321517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.321558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.321763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.321804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.322013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.322195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.322348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.322426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.322583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 
00:38:05.506 [2024-12-13 03:49:06.322731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.322973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.322987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.323973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.323987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.324142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.324156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 
00:38:05.506 [2024-12-13 03:49:06.324307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.324320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.324525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.324539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.506 [2024-12-13 03:49:06.324629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.506 [2024-12-13 03:49:06.324642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.506 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.324786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.324800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.324974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.325026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.325259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.325301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.325424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.325466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.325667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.325680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.325934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.325951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.326107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.326121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 
00:38:05.507 [2024-12-13 03:49:06.326299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.326313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.326469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.326483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.326642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.326655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.326807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.326820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.327089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.327132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.327335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.327384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.327645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.327686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.327993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.328007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.328248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.328289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 00:38:05.507 [2024-12-13 03:49:06.328421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.507 [2024-12-13 03:49:06.328461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.507 qpair failed and we were unable to recover it. 
00:38:05.513 [2024-12-13 03:49:06.365382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.365395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.365579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.365595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.365818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.365835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.365903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.365915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.366015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.366185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.366357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.366534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.366687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.366799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 
00:38:05.513 [2024-12-13 03:49:06.366906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.366923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.367929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.367997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.368089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 
00:38:05.513 [2024-12-13 03:49:06.368178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.368369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.368548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.368659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.368743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.368921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.368935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 
00:38:05.513 [2024-12-13 03:49:06.369527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.369911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.369927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.370021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.370034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.513 [2024-12-13 03:49:06.370105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.513 [2024-12-13 03:49:06.370116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.513 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.370373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.370386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.370518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.370532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.370709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.370722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.370798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.370811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 
00:38:05.514 [2024-12-13 03:49:06.371037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.371283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.371394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.371505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.371691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.371849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.371936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.371948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.372099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.372113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.372335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.372349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.372429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.372441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 
00:38:05.514 [2024-12-13 03:49:06.372525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.372541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.372758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.372772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.372868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.372881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.373085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.373098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.373322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.373336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.373472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.373485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.373646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.373660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.373766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.373779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 
00:38:05.514 [2024-12-13 03:49:06.374258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.374979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.374993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.375058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.375080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.375167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.375180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.375262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.375277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.375411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.375425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 
00:38:05.514 [2024-12-13 03:49:06.375594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.375607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.375708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.514 [2024-12-13 03:49:06.375722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.514 qpair failed and we were unable to recover it. 00:38:05.514 [2024-12-13 03:49:06.375806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.375819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.375909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.375926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.376076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.376089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.376223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.376236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.376384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.376397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.376623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.376637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.376706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.376718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.376889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.376939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 
00:38:05.515 [2024-12-13 03:49:06.377079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.377120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.377263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.377305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.377448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.377491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.377692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.377733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.377968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.378009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.378207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.378220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.378424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.378438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.378568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.378582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.378833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.378846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.379019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.379033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 
00:38:05.515 [2024-12-13 03:49:06.379204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.379217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.379303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.379316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.379511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.379551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.379828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.379870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.380023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.380162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.380256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.380470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.380552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.380695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 
00:38:05.515 [2024-12-13 03:49:06.380855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.380868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.381072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.381085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.381236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.381249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.381418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.381432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.381516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.515 [2024-12-13 03:49:06.381529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.515 qpair failed and we were unable to recover it. 00:38:05.515 [2024-12-13 03:49:06.381606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.381619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.381817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.381830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.381937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.381951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.382172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.382272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 
00:38:05.516 [2024-12-13 03:49:06.382428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.382520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.382674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.382755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.382927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.382941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.383166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.383180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.383408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.383422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.383574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.383587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.383675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.383689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.383818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.383831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 
00:38:05.516 [2024-12-13 03:49:06.384066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.384109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.384342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.384383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.384544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.384585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.384846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.384889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.385053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.385095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.385294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.385336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.385590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.385631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.385886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.385936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.386113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.386126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 00:38:05.516 [2024-12-13 03:49:06.386287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.516 [2024-12-13 03:49:06.386300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.516 qpair failed and we were unable to recover it. 
00:38:05.516 [2024-12-13 03:49:06.386449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.516 [2024-12-13 03:49:06.386466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.516 qpair failed and we were unable to recover it.
[... the same record repeats for every reconnect attempt between 03:49:06.386 and 03:49:06.421: posix_sock_create connect() fails with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:05.522 [2024-12-13 03:49:06.421535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.522 [2024-12-13 03:49:06.421549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.522 qpair failed and we were unable to recover it.
00:38:05.522 [2024-12-13 03:49:06.421631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.421643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.421739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.421755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.421958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.421971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.522 [2024-12-13 03:49:06.422836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.422849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 
00:38:05.522 [2024-12-13 03:49:06.423089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.522 [2024-12-13 03:49:06.423104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.522 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.423195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.423208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.423349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.423363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.423509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.423522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.423671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.423685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.423783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.423796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.423889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.423904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 
00:38:05.523 [2024-12-13 03:49:06.424430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.424903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.424921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.425010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.425023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.425181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.425224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.425346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.425388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.425531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.425572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.425714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.425755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 
00:38:05.523 [2024-12-13 03:49:06.425895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.425908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.426129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.426171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.426305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.426348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.426487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.426528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.426789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.426831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.427025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.427075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.427280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.427322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.427525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.427567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.427794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.427836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.428019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.428034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 
00:38:05.523 [2024-12-13 03:49:06.428265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.428308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.428526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.428567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.428797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.428838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.429113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.429127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.429273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.429287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.429449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.429465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.429682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.429737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.429940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.429971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.430085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.523 [2024-12-13 03:49:06.430098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.523 qpair failed and we were unable to recover it. 00:38:05.523 [2024-12-13 03:49:06.430258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 
00:38:05.524 [2024-12-13 03:49:06.430359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.430452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.430556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.430641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.430757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.430910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.430929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 
00:38:05.524 [2024-12-13 03:49:06.431485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.431944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.431957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.432110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.432124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.432227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.432241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.432347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.432361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.432503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.432516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.432678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.432691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.432845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.432859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 
00:38:05.524 [2024-12-13 03:49:06.433011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.433041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.433237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.433280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.433474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.433515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.433806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.433850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.434016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.434031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.434223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.434280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.434506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.434568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.434786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.434827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.434989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.435034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.435229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.435243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 
00:38:05.524 [2024-12-13 03:49:06.435400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.435456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.435697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.435762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.436064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.436108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.436310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.436355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.436641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.436684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.436884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.436937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.524 [2024-12-13 03:49:06.437117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.524 [2024-12-13 03:49:06.437178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.524 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.437265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.437279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.437365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.437378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.437613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.437627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 
00:38:05.525 [2024-12-13 03:49:06.437736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.437750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.437990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.438145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.438363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.438544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.438708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.438808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.438926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.438940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.439023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.439037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.439258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.439273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 
00:38:05.525 [2024-12-13 03:49:06.439449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.439464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.439605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.439635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.439790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.439834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.440167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.440213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.440310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.440330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.440510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.440536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.440688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.440702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.440962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.440976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.441204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.441305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 
00:38:05.525 [2024-12-13 03:49:06.441452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.441560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.441646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.441735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.441845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.441858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.442061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.442078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.442234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.442248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.442332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.442345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.442532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.442574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 00:38:05.525 [2024-12-13 03:49:06.442772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.525 [2024-12-13 03:49:06.442814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.525 qpair failed and we were unable to recover it. 
00:38:05.525 [2024-12-13 03:49:06.443020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.443062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.443221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.443263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.443406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.443447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.443665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.443705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.443891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.443905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.444024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.444068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.444309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.444333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.444440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.444461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.444655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.444699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.444915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.444974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 
00:38:05.526 [2024-12-13 03:49:06.445176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.445262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.445408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.445574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.445688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.445786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.445981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.445996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.446085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.446099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.446276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.446290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.446439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.446453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 
00:38:05.526 [2024-12-13 03:49:06.446620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.446634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.446725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.446739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.446895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.446909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.446996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 
00:38:05.526 [2024-12-13 03:49:06.447823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.447984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.447999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.448092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.448105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.448246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.448259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.448330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.448343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.526 [2024-12-13 03:49:06.448434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.526 [2024-12-13 03:49:06.448449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.526 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.448628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.448641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.448781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.448795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.448950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.448964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.449107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.449148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 
00:38:05.527 [2024-12-13 03:49:06.449352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.449394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.449584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.449625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.449754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.449795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.449942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.449985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.450134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.450176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.450365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.450378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.450508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.450521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.450672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.450686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.450777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.450790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.450874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.450887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 
00:38:05.527 [2024-12-13 03:49:06.451055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.451269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.451355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.451539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.451641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.451794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.451949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.451963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.452055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.452146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.452302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 
00:38:05.527 [2024-12-13 03:49:06.452470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.452636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.452762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.452941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.452988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.453192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.453237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.453396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.453411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.453645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.453688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.453945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.453987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.454124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.454164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.454297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.454311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 
00:38:05.527 [2024-12-13 03:49:06.454394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.454407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.454554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.454567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.454723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.454737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.527 [2024-12-13 03:49:06.454818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.527 [2024-12-13 03:49:06.454832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.527 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.454907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.454926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 
00:38:05.528 [2024-12-13 03:49:06.455725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.455846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.455988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 
00:38:05.528 [2024-12-13 03:49:06.456809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.456966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.456980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.457882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 
00:38:05.528 [2024-12-13 03:49:06.457980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.457994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.458174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.458216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.458374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.458423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.458577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.458632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.458807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.458859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.459087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.459134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.459341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.459363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.459449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.459470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.459641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.459664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.459823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.459845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 
00:38:05.528 [2024-12-13 03:49:06.459932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.459953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.460107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.528 [2024-12-13 03:49:06.460124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.528 qpair failed and we were unable to recover it. 00:38:05.528 [2024-12-13 03:49:06.460216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.460230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.460314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.460326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.460479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.460493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.460635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.460653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.460865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.460907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.461132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.461174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.461374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.461421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.461637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.461679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 
00:38:05.529 [2024-12-13 03:49:06.461875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.461889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.461986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.461999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.462984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.462997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 
00:38:05.529 [2024-12-13 03:49:06.463071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.463959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.463972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.464118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.464130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 
00:38:05.529 [2024-12-13 03:49:06.464276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.464294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.464434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.464448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.464552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.464579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.464758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.464784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.464908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.464936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.465041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.465056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.465216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.465230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.529 [2024-12-13 03:49:06.465385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.529 [2024-12-13 03:49:06.465398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.529 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.465487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.465500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.465664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.465678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 
00:38:05.530 [2024-12-13 03:49:06.465904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.465921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.466858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.466872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.467038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.467052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 
00:38:05.530 [2024-12-13 03:49:06.467199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.467213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.467382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.467424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.467568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.467609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.467800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.467842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.467971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.467985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.468134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.468239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.468478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.468592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.468681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 
00:38:05.530 [2024-12-13 03:49:06.468767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.468865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.468878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.469914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.469933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 00:38:05.530 [2024-12-13 03:49:06.470022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.530 [2024-12-13 03:49:06.470036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.530 qpair failed and we were unable to recover it. 
00:38:05.530 [2024-12-13 03:49:06.470122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.470307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.470461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.470622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.470775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.470868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.470951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.470964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.471112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.471125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.471216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.471230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 00:38:05.531 [2024-12-13 03:49:06.471302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.531 [2024-12-13 03:49:06.471314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.531 qpair failed and we were unable to recover it. 
00:38:05.532 [2024-12-13 03:49:06.475925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.475939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set
00:38:05.532 [2024-12-13 03:49:06.476260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.476915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.476950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.532 [2024-12-13 03:49:06.477106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.532 [2024-12-13 03:49:06.477127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.532 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.482651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.482664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.482744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.482757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.482899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.482912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.483063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.483078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.483212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.483228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.483426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.483469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.483631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.483706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.483866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.483915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.484111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.484133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.533 [2024-12-13 03:49:06.484216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.533 [2024-12-13 03:49:06.484237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:05.533 qpair failed and we were unable to recover it.
00:38:05.537 [2024-12-13 03:49:06.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.503761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.503938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.503952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.504949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.504974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 
00:38:05.537 [2024-12-13 03:49:06.505169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.505940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.505954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.506026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.506038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.506294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.506308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.506390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.506405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 
00:38:05.537 [2024-12-13 03:49:06.506624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.506637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.506785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.506799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.506886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.506902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.507057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.507070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.507223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.507259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.507392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.537 [2024-12-13 03:49:06.507435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.537 qpair failed and we were unable to recover it. 00:38:05.537 [2024-12-13 03:49:06.507646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.507691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.507975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.508020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.508295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.508338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.508548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.508589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 
00:38:05.538 [2024-12-13 03:49:06.508833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.508880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.509105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.509151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.509370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.509388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.509599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.509613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.509797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.509839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.510042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.510084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.510311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.510360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.510508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.510522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.510660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.510673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.510898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.510912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 
00:38:05.538 [2024-12-13 03:49:06.511084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.511098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.511256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.511289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.511439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.511479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.511708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.511748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.511887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.511982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.512135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.512177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.512417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.512521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.512757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.512804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.512955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.512998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.513256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.513298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 
00:38:05.538 [2024-12-13 03:49:06.513445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.513486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.513688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.513731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.513943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.513990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.514267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.514309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.514568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.514588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.514774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.514791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.515031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.515075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.515238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.515279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.515419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.515461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 00:38:05.538 [2024-12-13 03:49:06.515652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.538 [2024-12-13 03:49:06.515707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.538 qpair failed and we were unable to recover it. 
00:38:05.538 [2024-12-13 03:49:06.515975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.516061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.516225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.516239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.516471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.516514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.516658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.516699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.516993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.517151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.517267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.517398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.517547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.517706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 
00:38:05.539 [2024-12-13 03:49:06.517800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.517964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.517979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.518134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.518228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.518390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.518536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.518623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.518841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.518989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.519137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 
00:38:05.539 [2024-12-13 03:49:06.519225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.519390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.519555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.519636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.519810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.519828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.520005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.520020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.520194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.520219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.520380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.520403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.520570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.520591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.520687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.520706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 
00:38:05.539 [2024-12-13 03:49:06.520806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.520827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.520999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.539 [2024-12-13 03:49:06.521047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.539 qpair failed and we were unable to recover it. 00:38:05.539 [2024-12-13 03:49:06.521295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.521341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.521542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.521584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.521866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.521908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.522083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.522096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.522243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.522258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.522477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.522517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.522671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.522714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.522952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.523001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 
00:38:05.540 [2024-12-13 03:49:06.523263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.523304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.523508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.523521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.523680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.523694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.523910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.523974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.524183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.524224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.524445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.524492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.524657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.524670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.524765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.524778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.524868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.524880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.525116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.525160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 
00:38:05.540 [2024-12-13 03:49:06.525411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.525455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.525782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.525822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.525960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.526003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.526216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.526257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.526503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.526527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.526616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.526629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.526780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.526794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.526940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.526953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.527097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.527111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.527313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.527327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 
00:38:05.540 [2024-12-13 03:49:06.527418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.540 [2024-12-13 03:49:06.527430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.540 qpair failed and we were unable to recover it. 00:38:05.540 [2024-12-13 03:49:06.527569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.527582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.527797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.527838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.527988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.528030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.528337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.528369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.528502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.528515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.528747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.528774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.528924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.528970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.529087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.529300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 
00:38:05.541 [2024-12-13 03:49:06.529423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.529533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.529638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.529797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.529901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.529914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.530006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.530159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.530240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.530389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.530489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 
00:38:05.541 [2024-12-13 03:49:06.530644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.530797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.530840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.531052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.531096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.531432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.531485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.531707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.531754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.532035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.532300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.532343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.532496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.532539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.532675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.532715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.532978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.533023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 
00:38:05.541 [2024-12-13 03:49:06.533282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.533324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.533537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.533584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.533812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.533859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.534090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.534113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.534285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.534312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.534491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.534532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.534804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.541 [2024-12-13 03:49:06.534846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.541 qpair failed and we were unable to recover it. 00:38:05.541 [2024-12-13 03:49:06.535065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.535089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.535192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.535214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.535392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.535414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 
00:38:05.542 [2024-12-13 03:49:06.535564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.535586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.535754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.535776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.535855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.535870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.536078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.536092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.536307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.536330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.536469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.536482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.536709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.536755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.537000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.537045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.537247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.537267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.537379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.537401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 
00:38:05.542 [2024-12-13 03:49:06.537577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.537599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.537773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.537794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.538856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.538874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.539051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 
00:38:05.542 [2024-12-13 03:49:06.539222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.539451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.539534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.539620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.539788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.539955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.539969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 
00:38:05.542 [2024-12-13 03:49:06.540510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.540958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.540971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-12-13 03:49:06.541171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.542 [2024-12-13 03:49:06.541185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.541328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.541382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.541577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.541618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.541774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.541818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.542114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.542163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-12-13 03:49:06.542490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.542535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.542701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.542718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.542790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.542803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.543030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.543044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.543196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.543211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.543437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.543477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.543615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.543663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.543891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.543952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.544253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.544267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.544407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.544422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-12-13 03:49:06.544595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.544609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.544675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.544689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.544852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.544865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.544976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.544990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.545139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.545232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.545326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.545484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.545568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.545676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-12-13 03:49:06.545909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.545927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.546090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.546240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.546417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.546520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.546689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.546792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.546995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.547184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.547332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-12-13 03:49:06.547490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.547603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.547712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-12-13 03:49:06.547863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.543 [2024-12-13 03:49:06.547877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.547961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.547974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.548064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.548078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.548224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.548237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.548318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.548330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.548499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.548549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.548753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.548797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 
00:38:05.544 [2024-12-13 03:49:06.548942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.548989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.549149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.549195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.549348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.549390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.549543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.549584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.549789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.549830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.550103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.550146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.550348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.550397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.550529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.550570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.550827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.550870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.551154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.551198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 
00:38:05.544 [2024-12-13 03:49:06.551470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.551491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.551715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.551736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.551947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.551963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.552127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.552141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.552357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.552398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.552649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.552698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.552931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.552978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.553173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.553243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.553537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.553581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.553727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.553768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 
00:38:05.544 [2024-12-13 03:49:06.553984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.554210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.554313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.554405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.554551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.554706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.554953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.554996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.555153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.555194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-12-13 03:49:06.555355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.544 [2024-12-13 03:49:06.555397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.555669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.555692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 
00:38:05.545 [2024-12-13 03:49:06.555896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.555911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.556065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.556079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.556296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.556339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.556631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.556673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.556878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.556938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.557246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.557303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.557430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.557460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.557557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.557573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.557773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.557787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.557946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.557960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 
00:38:05.545 [2024-12-13 03:49:06.558161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.558175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.558250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.558263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.558528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.558542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.558694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.558707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.558864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.558878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.559048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.559096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.559267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.559316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.559627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.559668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.559812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.559853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.560023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.560068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 
00:38:05.545 [2024-12-13 03:49:06.560214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.560266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.560348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.560360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.560584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.560598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.560680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.560693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.560842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.560856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.561077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.561118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.561323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.561364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.561492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.561506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.561602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.561614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.561703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.561715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 
00:38:05.545 [2024-12-13 03:49:06.561891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.561905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.562155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.562168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.545 [2024-12-13 03:49:06.562246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.545 [2024-12-13 03:49:06.562259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.545 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.562334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.562346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.562409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.562421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.562556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.562570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.562725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.562739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.562843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.562855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.562929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.562943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.563159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 
00:38:05.546 [2024-12-13 03:49:06.563270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.563459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.563536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.563684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.563765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.563872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.563884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.564031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.564044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.564182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.564196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.564434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.564476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.564621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.564663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 
00:38:05.546 [2024-12-13 03:49:06.564889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.564943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.565065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.565133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.565399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.565442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.565644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.565657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.565857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.565870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 
00:38:05.546 [2024-12-13 03:49:06.566651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.566855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.566867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.567791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 
00:38:05.546 [2024-12-13 03:49:06.567960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.546 [2024-12-13 03:49:06.567980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.546 qpair failed and we were unable to recover it. 00:38:05.546 [2024-12-13 03:49:06.568068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.568912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.568930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.569195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.569235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 
00:38:05.547 [2024-12-13 03:49:06.569390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.569434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.569560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.569601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.569886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.569936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.570229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.570270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.570586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.570628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.570854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.570897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.571077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.571120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.571429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.571470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.571676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.571717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.571868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.571910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 
00:38:05.547 [2024-12-13 03:49:06.572170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.572184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.572272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.572285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.572393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.572408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.572611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.572624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.572778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.572792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.572874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.572886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.573030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.573052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.573154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.573167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.573399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.573413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.573553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.573567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 
00:38:05.547 [2024-12-13 03:49:06.573696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.573709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.573847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.573862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.574064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.574079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.574228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.574276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.574470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.574512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.574731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.574772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.574980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.575023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.575226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.575241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.575448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.575489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 00:38:05.547 [2024-12-13 03:49:06.575716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.547 [2024-12-13 03:49:06.575757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.547 qpair failed and we were unable to recover it. 
00:38:05.548 [2024-12-13 03:49:06.575981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.576024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.576173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.576215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.576454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.576497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.576698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.576740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.576971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.577158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.577387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.577484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.577720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.577812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 
00:38:05.548 [2024-12-13 03:49:06.577909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.577925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.578942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.578955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 
00:38:05.548 [2024-12-13 03:49:06.579119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.579839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.579851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.580000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.580082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 
00:38:05.548 [2024-12-13 03:49:06.580261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.580420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.580508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.580615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.580833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.580846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.581002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.548 [2024-12-13 03:49:06.581016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.548 qpair failed and we were unable to recover it. 00:38:05.548 [2024-12-13 03:49:06.581163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.581176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.581266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.581279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.581426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.581440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.581531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.581543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 
00:38:05.549 [2024-12-13 03:49:06.581755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.581768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.581861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.581874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.582969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.582982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 
00:38:05.549 [2024-12-13 03:49:06.583050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.583982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.583996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 
00:38:05.549 [2024-12-13 03:49:06.584275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.584865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.584880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.585023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.585038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.585145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.585159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 00:38:05.549 [2024-12-13 03:49:06.585252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.549 [2024-12-13 03:49:06.585273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.549 qpair failed and we were unable to recover it. 
00:38:05.550 [2024-12-13 03:49:06.585364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.585377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.585516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.585530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.585683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.585696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.585768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.585780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.585983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.585997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 
00:38:05.550 [2024-12-13 03:49:06.586624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.586905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.586927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 00:38:05.550 [2024-12-13 03:49:06.587837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.550 [2024-12-13 03:49:06.587851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.550 qpair failed and we were unable to recover it. 
00:38:05.550 [2024-12-13 03:49:06.587941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.550 [2024-12-13 03:49:06.587954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.550 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) repeat from 03:49:06.588 through 03:49:06.627 for tqpair=0x61500033fe80, 0x615000326480, 0x61500032ff80 and 0x615000350000, all with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."]
00:38:05.555 [2024-12-13 03:49:06.627856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.555 [2024-12-13 03:49:06.627869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.555 qpair failed and we were unable to recover it.
00:38:05.555 [2024-12-13 03:49:06.627961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.627974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.628106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.628119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.628327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.628340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.628491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.628504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.628641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.628655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.628738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.628750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.628908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.628925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.629134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.629147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.629233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.629246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.629378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.629392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 
00:38:05.556 [2024-12-13 03:49:06.629527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.629541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.629641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.629654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.629854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.629868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 
00:38:05.556 [2024-12-13 03:49:06.630816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.630987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.630999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.631906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.631998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 
00:38:05.556 [2024-12-13 03:49:06.632159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.632959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.632972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.633054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.633066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 00:38:05.556 [2024-12-13 03:49:06.633302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.556 [2024-12-13 03:49:06.633316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.556 qpair failed and we were unable to recover it. 
00:38:05.556 [2024-12-13 03:49:06.633463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.633476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.633638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.633652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.633823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.633836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.633915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.633933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.634072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.634228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.634392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.634546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.634708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.634802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 
00:38:05.557 [2024-12-13 03:49:06.634972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.634988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.635247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.635261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.635329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.635341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.635486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.635499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.635589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.635603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.635747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.635761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.635986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.636041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.636194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.636252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.636479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.636520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.636718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.636731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 
00:38:05.557 [2024-12-13 03:49:06.636812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.636825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.636963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.636978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.637154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.637189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.637383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.637425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.637592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.637647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.637870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.637913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.638152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.638196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.638340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.638361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.638510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.638530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.638721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.638742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 
00:38:05.557 [2024-12-13 03:49:06.638890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.638910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.639078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.639119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.639257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.639298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.639461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.639510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.639700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.639742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.640032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.640077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.640281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.640322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.640452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.640472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.640666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.640687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.640901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.640922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 
00:38:05.557 [2024-12-13 03:49:06.641064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.641077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.557 [2024-12-13 03:49:06.641312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.557 [2024-12-13 03:49:06.641354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.557 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.641501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.641543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.641686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.641727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.641942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.641986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.642122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.642162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.642421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.642462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.642695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.642709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.642792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.642805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.642913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.642931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 
00:38:05.558 [2024-12-13 03:49:06.643136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.643149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.643351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.643366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.643512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.643526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.643749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.643793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.643943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.643999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.644197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.644240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.644379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.644393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.644618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.644661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.644805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.644846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.645052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.645095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 
00:38:05.558 [2024-12-13 03:49:06.645365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.645406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.645641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.645689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.645920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.645933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.646142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.646157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.646247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.646260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.646461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.646474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.646610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.646624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.646713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.646726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.646896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.646910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.647087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.647128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 
00:38:05.558 [2024-12-13 03:49:06.647329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.647371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.647652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.647692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.647962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.648006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.648305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.648354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.648530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.648574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.648730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.648745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.648896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.648910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.649047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.649061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.649225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.649238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.558 [2024-12-13 03:49:06.649387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.649401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 
00:38:05.558 [2024-12-13 03:49:06.649546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.558 [2024-12-13 03:49:06.649560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.558 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.649714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.649727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.649807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.649819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.649965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.649979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.650059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.650072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.650153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.650167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.650241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.650253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.650346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.650359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.650504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.650518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 00:38:05.559 [2024-12-13 03:49:06.650599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.559 [2024-12-13 03:49:06.650613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.559 qpair failed and we were unable to recover it. 
00:38:05.559 [2024-12-13 03:49:06.650692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.559 [2024-12-13 03:49:06.650704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.559 qpair failed and we were unable to recover it.
00:38:05.559 [2024-12-13 03:49:06.650853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.559 [2024-12-13 03:49:06.650867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.559 qpair failed and we were unable to recover it.
00:38:05.843 [2024-12-13 03:49:06.654331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.843 [2024-12-13 03:49:06.654376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:05.843 qpair failed and we were unable to recover it.
00:38:05.843 [2024-12-13 03:49:06.654486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.843 [2024-12-13 03:49:06.654510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:05.843 qpair failed and we were unable to recover it.
00:38:05.848 [2024-12-13 03:49:06.687370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.848 [2024-12-13 03:49:06.687384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.848 qpair failed and we were unable to recover it.
00:38:05.848 [2024-12-13 03:49:06.687544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.687566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.687740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.687761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.687890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.687911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.688091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.688139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.688425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.688467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.688700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.688743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.688957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.689001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.689206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.689249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.689395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.689436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.689648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.689689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 
00:38:05.848 [2024-12-13 03:49:06.689891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.689913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.690094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.690115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.690295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.848 [2024-12-13 03:49:06.690340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.848 qpair failed and we were unable to recover it. 00:38:05.848 [2024-12-13 03:49:06.690507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.690556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.690857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.690898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.691109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.691152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.691444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.691486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.691603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.691617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.691750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.691763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.691896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.691910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 
00:38:05.849 [2024-12-13 03:49:06.692011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.692252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.692405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.692516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.692611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.692825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.692925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.692938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 
00:38:05.849 [2024-12-13 03:49:06.693501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.693869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.693882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.694067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.694081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.694159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.694172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.694313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.694327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.694422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.694435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.694566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.694580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 
00:38:05.849 [2024-12-13 03:49:06.694693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.694738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.695022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.695070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.695276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.695299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.695462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.695477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.695620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.695633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.695732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.695745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.695907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.695932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.696033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.696047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.696218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.696234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.696318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.696331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 
00:38:05.849 [2024-12-13 03:49:06.696486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.696500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.849 [2024-12-13 03:49:06.696566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.849 [2024-12-13 03:49:06.696579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.849 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.696721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.696733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.696810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.696826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.696891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.696904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 
00:38:05.850 [2024-12-13 03:49:06.697726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.697903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.697921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 
00:38:05.850 [2024-12-13 03:49:06.698877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.698976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.698991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.699980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.699995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.700100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 
00:38:05.850 [2024-12-13 03:49:06.700203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.700415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.700668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.700784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.700882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.700975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.700989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.701124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.701137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.701213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.701225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.701293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.701307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 00:38:05.850 [2024-12-13 03:49:06.701454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.850 [2024-12-13 03:49:06.701468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.850 qpair failed and we were unable to recover it. 
00:38:05.850 [2024-12-13 03:49:06.701590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.701615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.701718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.701741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.701839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.701861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.701952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.701972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.702111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.702194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.702343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.702510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.702661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.702829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 
00:38:05.851 [2024-12-13 03:49:06.702929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.702941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.703085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.703099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.703331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.703345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.703487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.703501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.703665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.703717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.703853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.703895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.704105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.704150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.704384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.704444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.704577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.704620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.704825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.704867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 
00:38:05.851 [2024-12-13 03:49:06.705177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.705219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.705437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.705478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.705735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.705780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.706067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.706112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.706388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.706430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.706659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.706709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.706851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.706907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.707061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.707075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.707283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.707297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.707532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.707583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 
00:38:05.851 [2024-12-13 03:49:06.707747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.707793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.708014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.708064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.708367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.708415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.708655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.708710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.708812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.708835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.708953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.708974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.709069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.709085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.709172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.709185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.851 qpair failed and we were unable to recover it. 00:38:05.851 [2024-12-13 03:49:06.709343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.851 [2024-12-13 03:49:06.709357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.852 qpair failed and we were unable to recover it. 00:38:05.852 [2024-12-13 03:49:06.709507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.852 [2024-12-13 03:49:06.709521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.852 qpair failed and we were unable to recover it. 
00:38:05.852 [2024-12-13 03:49:06.709665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.852 [2024-12-13 03:49:06.709679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.852 qpair failed and we were unable to recover it.
00:38:05.852 [2024-12-13 03:49:06.715533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.852 [2024-12-13 03:49:06.715578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:05.852 qpair failed and we were unable to recover it.
00:38:05.853 [2024-12-13 03:49:06.715716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.853 [2024-12-13 03:49:06.715743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:05.853 qpair failed and we were unable to recover it.
00:38:05.853 [2024-12-13 03:49:06.715929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.853 [2024-12-13 03:49:06.715990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:05.853 qpair failed and we were unable to recover it.
00:38:05.853 [2024-12-13 03:49:06.716779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.853 [2024-12-13 03:49:06.716795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.853 qpair failed and we were unable to recover it.
00:38:05.857 [2024-12-13 03:49:06.755624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.857 [2024-12-13 03:49:06.755637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.857 qpair failed and we were unable to recover it.
00:38:05.857 [2024-12-13 03:49:06.755784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.857 [2024-12-13 03:49:06.755798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.857 qpair failed and we were unable to recover it. 00:38:05.857 [2024-12-13 03:49:06.755951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.755965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.756110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.756123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.756257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.756271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.756420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.756441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.756594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.756608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.756679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.756691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.756848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.756861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.757067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.757112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.757350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.757392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 
00:38:05.858 [2024-12-13 03:49:06.757548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.757593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.757814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.757828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.758019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.758039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.758239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.758253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.758422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.758437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.758666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.758681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.758818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.758833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.758971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.758985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.759134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.759148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.759377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.759397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 
00:38:05.858 [2024-12-13 03:49:06.759532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.759548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.759631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.759645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.759845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.759860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.760136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.760150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.760307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.760321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.760458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.760472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.760714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.760756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.761050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.761099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.761333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.761376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.761640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.761695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 
00:38:05.858 [2024-12-13 03:49:06.761857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.761871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.761968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.761983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.762195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.762209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.762419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.762433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.762576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.762622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.762898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.762959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.763256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.763297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.763518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.858 [2024-12-13 03:49:06.763580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.858 qpair failed and we were unable to recover it. 00:38:05.858 [2024-12-13 03:49:06.763818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.763831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.764043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.764059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 
00:38:05.859 [2024-12-13 03:49:06.764165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.764177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.764393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.764406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.764477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.764490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.764584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.764597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.764777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.764790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.765047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.765089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.765348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.765390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.765693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.765735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.765961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.766003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.766312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.766353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 
00:38:05.859 [2024-12-13 03:49:06.766622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.766663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.766968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.767011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.767249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.767291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.767502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.767543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.767824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.767866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.768032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.768076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.768367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.768410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.768687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.768729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.768987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.769033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.769241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.769255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 
00:38:05.859 [2024-12-13 03:49:06.769452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.769470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.769709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.769725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.769904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.769923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.770145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.770158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.770458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.770500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.770786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.770830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.771126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.771140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.771296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.771310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.771536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.771550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.771758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.771793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 
00:38:05.859 [2024-12-13 03:49:06.772055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.772099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.772253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.772302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.772502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.772544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.772829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.772876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.773104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.773118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.773358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.773373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.773526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.859 [2024-12-13 03:49:06.773540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.859 qpair failed and we were unable to recover it. 00:38:05.859 [2024-12-13 03:49:06.773762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.773775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.773928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.773946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.774150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.774164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 
00:38:05.860 [2024-12-13 03:49:06.774394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.774435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.774577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.774619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.774912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.774979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.775117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.775170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.775314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.775355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.775639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.775681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.775937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.775965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.776194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.776207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.776356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.776369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.776459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.776472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 
00:38:05.860 [2024-12-13 03:49:06.776622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.776637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.776718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.776731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.776981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.776995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.777087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.777100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.777273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.777286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.777488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.777502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.777734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.777747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.777894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.777908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.778001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.778013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.778158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.778171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 
00:38:05.860 [2024-12-13 03:49:06.778305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.778320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.778464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.778478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.778614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.778778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.778791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.779050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.779064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.779238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.779251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.779428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.779441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.779629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.779674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.779933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.780020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.780316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.780400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 
00:38:05.860 [2024-12-13 03:49:06.780631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.780677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.780965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.781322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.781343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.781580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.781602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.860 qpair failed and we were unable to recover it. 00:38:05.860 [2024-12-13 03:49:06.781782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.860 [2024-12-13 03:49:06.781803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.781965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.782008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.782147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.782188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.782431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.782473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.782770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.782812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.783091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.783113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 
00:38:05.861 [2024-12-13 03:49:06.783372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.783420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.783576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.783619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.783888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.783948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.784101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.784143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.784415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.784457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.784729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.784774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.784935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.784949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.785108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.785144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.785453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.785494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.785813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.785863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 
00:38:05.861 [2024-12-13 03:49:06.786086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.786128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.786334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.786375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.786576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.786620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.786886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.786900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.787057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.787071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.787270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.787284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.787457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.787471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.787633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.787647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.787891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.787904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.788043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.788057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 
00:38:05.861 [2024-12-13 03:49:06.788283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.788299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.788518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.788531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.788695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.788708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.788933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.788980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.789284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.789325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.789601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.789641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.789857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.789870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.790039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.790054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.790322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.790362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 00:38:05.861 [2024-12-13 03:49:06.790626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.861 [2024-12-13 03:49:06.790677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.861 qpair failed and we were unable to recover it. 
00:38:05.862 [2024-12-13 03:49:06.790904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.790928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.791153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.791166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.791365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.791378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.791578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.791591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.791737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.791751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.791952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.791965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.792110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.792123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.792279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.792293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.792385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.792398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.792620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.792643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 
00:38:05.862 [2024-12-13 03:49:06.792820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.792835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.792980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.792994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.793100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.793117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.793329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.793342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.793513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.793549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.793750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.793792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.794016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.794058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.794156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.794172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.794304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.794318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.794571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.794585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 
00:38:05.862 [2024-12-13 03:49:06.794812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.794826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.794923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.794936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.795113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.795127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.795309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.795350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.795554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.795595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.795812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.795855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.796091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.796106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.796286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.796300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.796395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.796407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.796633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.796646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 
00:38:05.862 [2024-12-13 03:49:06.796807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.796825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.797003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.797016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.797237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.797252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.797407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.862 [2024-12-13 03:49:06.797421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.862 qpair failed and we were unable to recover it. 00:38:05.862 [2024-12-13 03:49:06.797561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.797575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.797713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.797726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.797822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.797835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.798053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.798072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.798302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.798317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.798538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.798581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 
00:38:05.863 [2024-12-13 03:49:06.798840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.798854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.799087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.799102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.799194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.799207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.799382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.799396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.799544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.799558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.799715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.799728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.799864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.799879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.800116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.800159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.800295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.800348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.800551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.800592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 
00:38:05.863 [2024-12-13 03:49:06.800802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.800844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.801161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.801212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.801497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.801540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.801737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.801751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.801838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.801851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.801991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.802005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.802260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.802274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.802412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.802425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.802608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.802623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.802689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.802702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 
00:38:05.863 [2024-12-13 03:49:06.802871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.802884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.803102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.803145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.803370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.803412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.803745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.803789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.803940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.803984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.804213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.804256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.804447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.804490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.804720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.804762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.805010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.805024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.805271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.805286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 
00:38:05.863 [2024-12-13 03:49:06.805441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.805457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.805594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.805607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.863 [2024-12-13 03:49:06.805846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.863 [2024-12-13 03:49:06.805861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.863 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.806036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.806055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.806211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.806225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.806385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.806400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.806471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.806483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.806646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.806660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.806881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.806933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.807221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.807263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 
00:38:05.864 [2024-12-13 03:49:06.807496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.807540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.807733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.807775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.808001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.808043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.808241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.808255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.808491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.808534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.808745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.808786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.809085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.809131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.809322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.809364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.809624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.809666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.809793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.809834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 
00:38:05.864 [2024-12-13 03:49:06.810029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.810067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.810310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.810324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.810420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.810433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.810673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.810687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.810938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.810953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.811028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.811041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.811139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.811152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.811354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.811369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.811570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.811583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.811740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.811755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 
00:38:05.864 [2024-12-13 03:49:06.811853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.811865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.812981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.812998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.813263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.813277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.813423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.813448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 
00:38:05.864 [2024-12-13 03:49:06.813612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.813629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.864 qpair failed and we were unable to recover it. 00:38:05.864 [2024-12-13 03:49:06.813791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.864 [2024-12-13 03:49:06.813806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.813891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.813903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.814123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.814169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.814442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.814529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.814845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.814948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.815169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.815186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.815334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.815349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.815573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.815587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.815732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.815746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 
00:38:05.865 [2024-12-13 03:49:06.815990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.816005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.816231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.816245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.816391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.816405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.816628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.816641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.816809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.816832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.817061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.817075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.817245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.817260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.817468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.817510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.817739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.817782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.818090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.818105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 
00:38:05.865 [2024-12-13 03:49:06.818193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.818206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.818347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.818361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.818606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.818620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.818839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.818854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.819079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.819094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.819249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.819264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.819403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.819416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.819605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.819632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.819814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.819844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.819979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.820007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 
00:38:05.865 [2024-12-13 03:49:06.820168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.820185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.820406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.820465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.820709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.820756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.820986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.821007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.821161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.821175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.821398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.821413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.821510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.821523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.821671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.821685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.821838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.821852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 00:38:05.865 [2024-12-13 03:49:06.822023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.865 [2024-12-13 03:49:06.822036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.865 qpair failed and we were unable to recover it. 
00:38:05.865 [2024-12-13 03:49:06.822205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.866 [2024-12-13 03:49:06.822221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.866 qpair failed and we were unable to recover it.
00:38:05.866 [... the same three-line record (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously for this span, with controller timestamps 03:49:06.822 through 03:49:06.866 and console timestamps 00:38:05.865 through 00:38:05.871 ...]
00:38:05.871 [2024-12-13 03:49:06.866990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.867032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.867238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.867252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.867495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.867510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.867682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.867697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.867876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.867931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.868139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.868181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.868495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.868537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.871 qpair failed and we were unable to recover it. 00:38:05.871 [2024-12-13 03:49:06.868691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.871 [2024-12-13 03:49:06.868732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.868943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.868985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.869174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.869188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 
00:38:05.872 [2024-12-13 03:49:06.869343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.869358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.869497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.869512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.869661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.869674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.869822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.869837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.869927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.869942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.870046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.870151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.870238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.870484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.870647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 
00:38:05.872 [2024-12-13 03:49:06.870750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.870908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.870929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.871007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.871020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.871240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.871254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.871325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.871338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.871481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.871494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.871730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.871745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.871844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.871857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.872009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.872102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 
00:38:05.872 [2024-12-13 03:49:06.872266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.872419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.872595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.872815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.872969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.872984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.873079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.873166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.873369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.873549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.873663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 
00:38:05.872 [2024-12-13 03:49:06.873855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.873960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.873973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.874116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.874152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.874293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.874308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.874472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.874487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.872 qpair failed and we were unable to recover it. 00:38:05.872 [2024-12-13 03:49:06.874640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.872 [2024-12-13 03:49:06.874655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.874725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.874738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.874939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.874954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.875045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.875205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 
00:38:05.873 [2024-12-13 03:49:06.875304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.875436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.875620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.875804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.875923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.875936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.876091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.876105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.876267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.876282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.876368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.876385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.876586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.876630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.876783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.876828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 
00:38:05.873 [2024-12-13 03:49:06.877137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.877184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.877467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.877518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.877803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.877847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.878079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.878094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.878312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.878326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.878465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.878480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.878726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.878740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.878826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.878840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.879004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.879019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.879244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.879258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 
00:38:05.873 [2024-12-13 03:49:06.879418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.879432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.879664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.879679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.879928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.879942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.880036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.880050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.880282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.880296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.880499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.880514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.880756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.880769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.880934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.873 [2024-12-13 03:49:06.880949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.873 qpair failed and we were unable to recover it. 00:38:05.873 [2024-12-13 03:49:06.881213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.881258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.881494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.881539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 
00:38:05.874 [2024-12-13 03:49:06.881772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.881815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.882089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.882138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.882299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.882314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.882465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.882479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.882597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.882610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.882763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.882777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.882947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.882962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.883153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.883168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.883354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.883368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.883617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.883661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 
00:38:05.874 [2024-12-13 03:49:06.883890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.883964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.884259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.884274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.884459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.884473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.884699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.884713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.884960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.884975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.885179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.885193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.885363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.885407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.885628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.885676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.885961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.886005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.886353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.886410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 
00:38:05.874 [2024-12-13 03:49:06.886615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.886629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.886718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.886732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.886902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.886928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.887158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.887184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.887337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.887351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.887585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.887599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.887844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.887902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.888060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.888103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.888318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.888370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.888531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.888546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 
00:38:05.874 [2024-12-13 03:49:06.888638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.888651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.888834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.888849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.888939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.888952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.889164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.889177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.889402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.874 [2024-12-13 03:49:06.889415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.874 qpair failed and we were unable to recover it. 00:38:05.874 [2024-12-13 03:49:06.889644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.889658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.889927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.889941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.890158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.890173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.890242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.890254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.890411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.890424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 
00:38:05.875 [2024-12-13 03:49:06.890646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.890687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.890965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.891010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.891240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.891279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.891419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.891434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.891581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.891595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.891818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.891832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.892029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.892074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.892302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.892343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.892546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.892589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.892847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.892890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 
00:38:05.875 [2024-12-13 03:49:06.893078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.893092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.893296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.893309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.893512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.893526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.893711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.893726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.893879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.893893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.894137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.894182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.894419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.894460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.894754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.894807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.895090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.895134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.895426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.895445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 
00:38:05.875 [2024-12-13 03:49:06.895613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.895627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.895852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.895867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.895954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.895968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.896122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.896137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.896254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.896282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.896540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.896585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.896785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.896828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.896985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.897028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.897200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.897215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 00:38:05.875 [2024-12-13 03:49:06.897419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.875 [2024-12-13 03:49:06.897432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.875 qpair failed and we were unable to recover it. 
00:38:05.875 [2024-12-13 03:49:06.897580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.897594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.897680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.897695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.897790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.897804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.898891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.898905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 
00:38:05.876 [2024-12-13 03:49:06.899048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.899095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.899306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.899353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.899565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.899611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.899705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.899720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.899887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.899915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 
00:38:05.876 [2024-12-13 03:49:06.900635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.900900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.900913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 
00:38:05.876 [2024-12-13 03:49:06.901840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.901928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.901941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.902081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.902094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.902184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.902197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.902435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.902448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.902541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.902553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.876 [2024-12-13 03:49:06.902629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.876 [2024-12-13 03:49:06.902643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.876 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.902721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.902735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.902868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.902880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.902954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.902967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 
00:38:05.877 [2024-12-13 03:49:06.903116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.903922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.903936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 
00:38:05.877 [2024-12-13 03:49:06.904171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.904945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.904970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 
00:38:05.877 [2024-12-13 03:49:06.905459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.905874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.905887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 
00:38:05.877 [2024-12-13 03:49:06.906702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.906888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.906910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.907062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.907076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.907215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.907229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.877 qpair failed and we were unable to recover it. 00:38:05.877 [2024-12-13 03:49:06.907438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.877 [2024-12-13 03:49:06.907482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.907678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.907720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.907869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.907911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.908051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.908092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.908312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.908326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 
00:38:05.878 [2024-12-13 03:49:06.908410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.908424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.908575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.908588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.908658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.908670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.908806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.908820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.909026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.909065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.909313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.909356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.909498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.909540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.909738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.909779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.910076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.910092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.910171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.910186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 
00:38:05.878 [2024-12-13 03:49:06.910339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.910375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.910516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.910530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.910671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.910685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.910911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.910938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.911090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.911104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.911283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.911323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.911514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.911555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.911694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.911737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.911956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.912000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.912252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.912295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 
00:38:05.878 [2024-12-13 03:49:06.912556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.912599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.912805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.912848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.913853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.913869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.914011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.914024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 
00:38:05.878 [2024-12-13 03:49:06.914184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.914198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.914343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.914357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.914587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.914601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.914688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.878 [2024-12-13 03:49:06.914700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.878 qpair failed and we were unable to recover it. 00:38:05.878 [2024-12-13 03:49:06.914936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.914950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.915122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.915290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.915388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.915496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.915736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 
00:38:05.879 [2024-12-13 03:49:06.915842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.915945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.915959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.916050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.916063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.916292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.916304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.916400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.916411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.916600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.916611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.916694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.916706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.916855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.916867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.917011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.917109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 
00:38:05.879 [2024-12-13 03:49:06.917274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.917450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.917695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.917780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.917908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.917942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.918258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.918281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.918389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.918414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.918529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.918544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.918689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.918701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.918795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.918807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 
00:38:05.879 [2024-12-13 03:49:06.918961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.918974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.919077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.919089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.919294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.919306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.919377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.919389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.919537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.919549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.919703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.919716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.919851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.919863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.920003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.920018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.920199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.920211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 00:38:05.879 [2024-12-13 03:49:06.920303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.879 [2024-12-13 03:49:06.920315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.879 qpair failed and we were unable to recover it. 
00:38:05.879 [2024-12-13 03:49:06.920562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.879 [2024-12-13 03:49:06.920574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.879 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error against addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats for every connection attempt between 03:49:06.920562 and 03:49:06.971684 (Jenkins wall clock 00:38:05.879 through 00:38:05.885). Most attempts target tqpair=0x61500033fe80; one attempt targets tqpair=0x61500032ff80 and a short run targets tqpair=0x615000326480 before the attempts return to 0x61500033fe80 ...]
00:38:05.885 [2024-12-13 03:49:06.971643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:05.885 [2024-12-13 03:49:06.971684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:05.885 qpair failed and we were unable to recover it.
00:38:05.885 [2024-12-13 03:49:06.971942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.971984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.972212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.972255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.972559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.972573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.972821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.972834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.973055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.973068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.973149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.973161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.973305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.973318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.973488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.973501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.973655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.973668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.973834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.973847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 
00:38:05.885 [2024-12-13 03:49:06.974114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.974128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.885 [2024-12-13 03:49:06.974213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.885 [2024-12-13 03:49:06.974226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.885 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.974432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.974446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.974698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.974716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.974936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.974986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.975199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.975243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.975501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.975542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.975778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.975820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.976123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.976166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.976429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.976442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 
00:38:05.886 [2024-12-13 03:49:06.976667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.976681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.976754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.976766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.976945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.976959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.977108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.977122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.977258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.977271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.977472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.977486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.977640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.977653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.977880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.977929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.978220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.978262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.978535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.978577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 
00:38:05.886 [2024-12-13 03:49:06.978847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.978888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.979199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.979242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.979443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.979484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.979774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.979816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.980020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.980062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.980264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.980306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.980549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.980562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.980764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.980777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.980927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.980940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.981184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.981225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 
00:38:05.886 [2024-12-13 03:49:06.981433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.981474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.981789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.981830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.982130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.982173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.982477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.982517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.982737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.982778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.983029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.983071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.983360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.983401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.983684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.983725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.983961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.984003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.984304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.984344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 
00:38:05.886 [2024-12-13 03:49:06.984615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.984655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.984880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.984931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.886 [2024-12-13 03:49:06.985242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.886 [2024-12-13 03:49:06.985283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.886 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.985594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.985636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.985836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.985884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.986181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.986222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.986447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.986460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.986713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.986726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.986955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.986970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.987198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.987211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 
00:38:05.887 [2024-12-13 03:49:06.987425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.987439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.987695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.987708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.987861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.987874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.988028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.988043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.988272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.988285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.988375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.988387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.988647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.988688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.988948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.988990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.989279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.989322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.989607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.989648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 
00:38:05.887 [2024-12-13 03:49:06.989938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.989982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.990217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.990258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.990483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.990524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.990822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.990835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.991053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.991067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.991318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.991331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.991561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.991604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.991818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.991872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.992174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.992217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.992454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.992467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 
00:38:05.887 [2024-12-13 03:49:06.992627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.992641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.992868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.992881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.993090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.993143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.993433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.993474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.993663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.993705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.993988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.994030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.994325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.994338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.994552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.994565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.994698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.994711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.994867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.994909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 
00:38:05.887 [2024-12-13 03:49:06.995245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.995287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.995557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.995570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.995795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.995808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.995907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.995926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.887 qpair failed and we were unable to recover it. 00:38:05.887 [2024-12-13 03:49:06.996136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.887 [2024-12-13 03:49:06.996152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.996303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.996316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.996470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.996483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.996693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.996709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.996856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.996869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.997046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.997059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 
00:38:05.888 [2024-12-13 03:49:06.997146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.997158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.997311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.997324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.997496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.997509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.997678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.997692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.997941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.997984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.998208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.998250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.998552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.998592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.998873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.998914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.999225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.999267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:06.999483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.999523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 
00:38:05.888 [2024-12-13 03:49:06.999799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:06.999840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.000067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.000110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.000319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.000361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.000527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.000541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.000764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.000777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.000971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.001014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.001206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.001255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.001535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.001576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.001870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.001912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.002160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.002201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 
00:38:05.888 [2024-12-13 03:49:07.002398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.002439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.002586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.002628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.002853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.002866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.003105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.003119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.003326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.003340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.003506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.003520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.003753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.003793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.003941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.003985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.004270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.004313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.004588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.004628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 
00:38:05.888 [2024-12-13 03:49:07.004879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.004933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.005218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.005259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.005539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.005581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.005848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.005888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.006112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.006160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.888 [2024-12-13 03:49:07.006466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.888 [2024-12-13 03:49:07.006506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.888 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.006813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.006826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.007083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.007098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.007235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.007248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.007502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.007543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 
00:38:05.889 [2024-12-13 03:49:07.007864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.007906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.008203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.008266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.008553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.008594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.008884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.008934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.009166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.009216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.009418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.009432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.009581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.009595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.009762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.009801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.010114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.010157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.010451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.010493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 
00:38:05.889 [2024-12-13 03:49:07.010773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.010814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.011079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.011122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.011401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.011443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.011740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.011783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.012068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.012110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.012353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.012366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.012442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.012454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.012609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.012622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.012847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.012861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.013097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.013110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 
00:38:05.889 [2024-12-13 03:49:07.013266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.013280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.013492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.013578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.013936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.014020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.014331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.014418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.014754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.014798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.014968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.015011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.889 qpair failed and we were unable to recover it. 00:38:05.889 [2024-12-13 03:49:07.015222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.889 [2024-12-13 03:49:07.015263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.015486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.015527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.015786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.015826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.016118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.016161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 
00:38:05.890 [2024-12-13 03:49:07.016387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.016429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.016660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.016673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.016901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.016915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.017140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.017153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.017414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.017431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.017656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.017669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.017871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.017885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.018042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.018056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.018266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.018307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.018634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.018675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 
00:38:05.890 [2024-12-13 03:49:07.018892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.018947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.019257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.019298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.019575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.019615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.019885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.019951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.020258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.020301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.020567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.020608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.020884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.020937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.021148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.021189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.021503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.021544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.021827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.021868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 
00:38:05.890 [2024-12-13 03:49:07.022029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.022071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.022269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.022310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.022513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.022555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.022806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.022820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.022971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.022984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.023199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.023241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.023441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.023484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.023691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.023732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.023955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.023998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.024307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.024347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 
00:38:05.890 [2024-12-13 03:49:07.024643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.024686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.024880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.024946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.025267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.025312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.025587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.025630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.025841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.025884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.026219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.026274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.890 qpair failed and we were unable to recover it. 00:38:05.890 [2024-12-13 03:49:07.026571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:05.890 [2024-12-13 03:49:07.026616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:05.891 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.026842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.026905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.027194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.027240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.027471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.027514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 
00:38:06.175 [2024-12-13 03:49:07.027811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.027853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.028143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.028187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.028434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.028476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.028766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.028807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.028942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.028995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.029280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.029322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.029594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.029636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.029908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.029964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.030111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.030162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.030330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.030351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 
00:38:06.175 [2024-12-13 03:49:07.030538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.030581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.030865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.030906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.031176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.031217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.031501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.031544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.031741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.031763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.031958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.031975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.032076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.032089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.175 qpair failed and we were unable to recover it. 00:38:06.175 [2024-12-13 03:49:07.032238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.175 [2024-12-13 03:49:07.032252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.032350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.032363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.032589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.032602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 
00:38:06.176 [2024-12-13 03:49:07.032698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.032710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.032884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.032898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.033090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.033112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.033363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.033376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.033473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.033486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.033686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.033699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.033913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.033932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.034118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.034131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.034356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.034369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.034516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.034529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 
00:38:06.176 [2024-12-13 03:49:07.034667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.034680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.034915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.034944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.035146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.035189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.035498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.035539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.035803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.035843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.036153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.036195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.036411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.036453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.036759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.036800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.037015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.037058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.037258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.037298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 
00:38:06.176 [2024-12-13 03:49:07.037564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.037605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.037884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.037933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.038215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.038256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.038409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.038450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.038729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.038754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.038992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.039014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.039141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.039156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.039415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.039455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.039663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.039706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.039985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.040028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 
00:38:06.176 [2024-12-13 03:49:07.040272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.040313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.040606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.040648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.040889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.176 [2024-12-13 03:49:07.040902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.176 qpair failed and we were unable to recover it. 00:38:06.176 [2024-12-13 03:49:07.040998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.041011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.041213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.041226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.041373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.041386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.041623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.041665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.041892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.041944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.042183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.042220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.042364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.042377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 
00:38:06.177 [2024-12-13 03:49:07.042621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.042662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.042938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.042981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.043173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.043214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.043417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.043430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.043633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.043646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.043818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.043831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.043982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.044028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.044332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.044373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.044605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.044648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.044950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.044992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 
00:38:06.177 [2024-12-13 03:49:07.045278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.045321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.045669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.045742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.045973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.046002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.046255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.046279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.046390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.046405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.046626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.046639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.046867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.046880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.047158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.047200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.047407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.047448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.047712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.047761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 
00:38:06.177 [2024-12-13 03:49:07.047963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.048006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.048312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.048352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.048632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.048681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.048929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.048942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.049094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.049110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.049262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.049276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.049499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.049512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.049742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.049785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.049994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.050037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 00:38:06.177 [2024-12-13 03:49:07.050291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.177 [2024-12-13 03:49:07.050332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.177 qpair failed and we were unable to recover it. 
00:38:06.177 [2024-12-13 03:49:07.050517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.177 [2024-12-13 03:49:07.050530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:06.177 qpair failed and we were unable to recover it.
00:38:06.177-00:38:06.183 [2024-12-13 03:49:07.050727 .. 03:49:07.101221] (the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt in this interval; affected tqpair handles: 0x61500033fe80, 0x61500032ff80, 0x615000326480, all with addr=10.0.0.2, port=4420)
00:38:06.183 [2024-12-13 03:49:07.101391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.101403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.183 [2024-12-13 03:49:07.101579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.101593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.183 [2024-12-13 03:49:07.101835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.101849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.183 [2024-12-13 03:49:07.101932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.101945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.183 [2024-12-13 03:49:07.102048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.102061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.183 [2024-12-13 03:49:07.102209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.102223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.183 [2024-12-13 03:49:07.102419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.183 [2024-12-13 03:49:07.102432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.183 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.102649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.102663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.102797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.102815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.102947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.102960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 
00:38:06.184 [2024-12-13 03:49:07.103102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.103115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.103325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.103373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.103571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.103611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.103811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.103856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.103997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.104011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.104161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.104174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.104339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.104352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.104578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.104591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.104833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.104846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.105015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.105029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 
00:38:06.184 [2024-12-13 03:49:07.105253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.105266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.105417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.105430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.105641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.105654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.105823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.105836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.106053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.106095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.106356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.106398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.106604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.106616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.106854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.106867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.107049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.107063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.107266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.107308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 
00:38:06.184 [2024-12-13 03:49:07.107577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.107618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.107894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.107956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.108200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.108243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.108519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.108560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.108846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.108887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.109034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.109048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.109271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.109285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.109517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.109558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.109844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.109887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 00:38:06.184 [2024-12-13 03:49:07.110139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.184 [2024-12-13 03:49:07.110180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.184 qpair failed and we were unable to recover it. 
00:38:06.184 [2024-12-13 03:49:07.110388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.110429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.110711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.110751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.110953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.110995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.111152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.111195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.111470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.111511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.111797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.111837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.112096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.112138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.112427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.112470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.112621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.112661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.112957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.113001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 
00:38:06.185 [2024-12-13 03:49:07.113269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.113311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.113574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.113622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.113814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.113855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.114069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.114111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.114391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.114431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.114697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.114738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.114964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.115008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.115227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.115267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.115476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.115517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.115723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.115736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 
00:38:06.185 [2024-12-13 03:49:07.115966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.116007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.116267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.116309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.116510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.116552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.116702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.116742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.117043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.117085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.117342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.117384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.117559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.117572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.117775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.117788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.117958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.117971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.118180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.118193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 
00:38:06.185 [2024-12-13 03:49:07.118354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.118367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.118608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.118622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.118795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.118813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.118982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.119025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.119250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.119291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.119584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.119626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.185 [2024-12-13 03:49:07.119900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.185 [2024-12-13 03:49:07.119952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.185 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.120214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.120255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.120477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.120519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.120826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.120867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 
00:38:06.186 [2024-12-13 03:49:07.120988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.121002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.121225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.121238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.121478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.121491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.121666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.121679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.121860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.121901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.122187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.122228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.122529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.122572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.122766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.122807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.123041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.123084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.123228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.123270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 
00:38:06.186 [2024-12-13 03:49:07.123546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.123588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.123857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.123872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.124097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.124111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.124331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.124345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.124559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.124600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.124878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.124940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.125224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.125265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.125501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.125542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.125823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.125864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.126100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.126142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 
00:38:06.186 [2024-12-13 03:49:07.126398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.126439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.126721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.126762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.127016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.127059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.127262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.127303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.127501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.127543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.127756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.127797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.127984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.127997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.128228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.128270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.128572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.128613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.128812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.128825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 
00:38:06.186 [2024-12-13 03:49:07.129047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.129061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.129225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.129238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.129474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.186 [2024-12-13 03:49:07.129514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.186 qpair failed and we were unable to recover it. 00:38:06.186 [2024-12-13 03:49:07.129708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.129750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.129985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.130028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.130242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.130283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.130541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.130583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.130810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.130823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.131068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.131082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.131307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.131320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 
00:38:06.187 [2024-12-13 03:49:07.131520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.131533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.131765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.131806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.131944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.131986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.132191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.132233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.132446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.132487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.132696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.132738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.133013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.133027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.133178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.133191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.133418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.133430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.133649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.133691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 
00:38:06.187 [2024-12-13 03:49:07.133880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.133934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.134197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.134246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.134526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.134567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.134811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.134835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.134988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.135002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.135257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.135300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.135514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.135568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.135760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.135800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.136055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.136097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.136369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.136411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 
00:38:06.187 [2024-12-13 03:49:07.136580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.136620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.136915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.136985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.137193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.137234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.137535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.137577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.137839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.137879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.138152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.138197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.138468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.138509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.138810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.138850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.139138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.139182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.139488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.139543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 
00:38:06.187 [2024-12-13 03:49:07.139709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.139722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.139924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.187 [2024-12-13 03:49:07.139938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.187 qpair failed and we were unable to recover it. 00:38:06.187 [2024-12-13 03:49:07.140073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.140100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.140379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.140419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.140688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.140731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.141055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.141098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.141413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.141456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.141604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.141646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.141954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.141998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.142272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.142313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 
00:38:06.188 [2024-12-13 03:49:07.142517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.142557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.142823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.142862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.142974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.142987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.143221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.143234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.143370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.143383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.143660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.143702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.143930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.143973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.144166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.144209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.144428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.144471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.144695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.144736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 
00:38:06.188 [2024-12-13 03:49:07.145019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.145033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.145260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.145278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.145485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.145498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.145670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.145683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.145907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.145966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.146183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.146225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.146377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.146419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.146700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.146742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.146970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.146986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.147066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.147079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 
00:38:06.188 [2024-12-13 03:49:07.147248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.147262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.147414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.147428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.147618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.147631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.147765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.147778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.147946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.147960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.148041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.148054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.148222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.148247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.148460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.148474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.148743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.148758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.188 [2024-12-13 03:49:07.148896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.148909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 
00:38:06.188 [2024-12-13 03:49:07.148984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.188 [2024-12-13 03:49:07.148996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.188 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.149177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.149190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.149367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.149380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.149543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.149557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.149712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.149726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.149944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.149958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.150154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.150208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.150470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.150513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.150810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.150852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.151082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.151127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 
00:38:06.189 [2024-12-13 03:49:07.151435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.151500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.151780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.151821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.152952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.152968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.153121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.153135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 
00:38:06.189 [2024-12-13 03:49:07.153415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.153475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.153777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.153826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.154138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.154245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.154362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.154563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.154679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.154847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.154986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.155000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.155156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.155169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 
00:38:06.189 [2024-12-13 03:49:07.155396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.155411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.155638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.155681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.155914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.155982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.189 [2024-12-13 03:49:07.156186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.189 [2024-12-13 03:49:07.156227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.189 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.156426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.156471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.156763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.156812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.157018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.157032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.157276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.157291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.157443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.157456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.157712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.157754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 
00:38:06.190 [2024-12-13 03:49:07.157969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.158015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.158236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.158277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.158536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.158578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.158866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.158912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.159204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.159247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.159533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.159576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.159781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.159830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.160037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.160080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.160360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.160445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.160807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.160894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 
00:38:06.190 [2024-12-13 03:49:07.161260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.161310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.161514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.161612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.161903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.161962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.162127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.162169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.162417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.162460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.162664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.162685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.162847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.162868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.163131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.163175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.163404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.163447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.163669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.163711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 
00:38:06.190 [2024-12-13 03:49:07.163971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.164013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.164296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.164346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.164628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.164671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.164910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.164937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.165104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.165125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.165312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.165353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.165534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.165578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.165796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.165845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.166108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.166130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 00:38:06.190 [2024-12-13 03:49:07.166320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.190 [2024-12-13 03:49:07.166339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.190 qpair failed and we were unable to recover it. 
00:38:06.191 [2024-12-13 03:49:07.166539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.166553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.166774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.166788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.166854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.166867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.167029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.167043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.167246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.167259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.167339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.167353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.167572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.167585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.167783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.167797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.167962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.167977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.168071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.168085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 
00:38:06.191 [2024-12-13 03:49:07.168227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.168240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.168449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.168462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.168642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.168656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.168818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.168831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.169016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.169062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.169307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.169351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.169633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.169680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.169962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.170009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.170194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.170247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.170472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.170516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 
00:38:06.191 [2024-12-13 03:49:07.170819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.170864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.171137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.171152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.171323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.171337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.171598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.171640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.171808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.171851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.172046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.172091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.172351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.172395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.172634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.172681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.172886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.172913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.173137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.173180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 
00:38:06.191 [2024-12-13 03:49:07.173390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.173433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.173694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.173744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.173975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.174020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.174243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.174267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.174472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.174486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.174713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.174728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.174885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.174898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.175132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.175145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.191 [2024-12-13 03:49:07.175302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.191 [2024-12-13 03:49:07.175316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.191 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.175401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.175414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 
00:38:06.192 [2024-12-13 03:49:07.175571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.175585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.175722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.175737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.175928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.175943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.176055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.176068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.176243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.176257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.176407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.176422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.176508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.176522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.176683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.176697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.176848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.176862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.177118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.177166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 
00:38:06.192 [2024-12-13 03:49:07.177459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.177505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.177763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.177776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.177972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.177992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.178170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.178184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.178290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.178316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.178431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.178445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.178651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.178665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.178804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.178817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.179100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.179186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.179461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.179547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 
00:38:06.192 [2024-12-13 03:49:07.179819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.179867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.180177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.180222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.180434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.180476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.180687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.180738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.181014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.181058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.181235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.181278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.181547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.181588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.181808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.181851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.182106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.182126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.182294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.182315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 
00:38:06.192 [2024-12-13 03:49:07.182589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.182638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.182956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.182987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.183236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.183259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.183412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.183434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.183671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.183691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.183863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.183884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.184065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.184087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.192 [2024-12-13 03:49:07.184256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.192 [2024-12-13 03:49:07.184272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.192 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.184352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.184365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.184449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.184461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 
00:38:06.193 [2024-12-13 03:49:07.184735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.184749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.184892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.184906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.185020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.185099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.185286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.185397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.185503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.185706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.185983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.186047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.186201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.186243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 
00:38:06.193 [2024-12-13 03:49:07.186396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.186456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.186710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.186753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.186964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.186988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.187134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.187162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.187286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.187309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.187491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.187507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.187586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.187598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.187745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.187761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.187962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.187986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.188154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.188175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 
00:38:06.193 [2024-12-13 03:49:07.188343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.188364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.188592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.188634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.188951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.188998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.189253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.189282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.189396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.189416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.189660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.189702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.190001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.190045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.190307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.190350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.190644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.190684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.190952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.190995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 
00:38:06.193 [2024-12-13 03:49:07.191161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.191205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.193 [2024-12-13 03:49:07.191357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.193 [2024-12-13 03:49:07.191399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.193 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.191628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.191669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.191924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.191946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.192046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.192180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.192363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.192463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.192737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.192831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 
00:38:06.194 [2024-12-13 03:49:07.192982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.192996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.193155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.193170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.193347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.193400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.193764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.193814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.194069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.194084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.194256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.194270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.194429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.194448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.194650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.194665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.194829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.194843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.195060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 
00:38:06.194 [2024-12-13 03:49:07.195218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.195314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.195414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.195607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.195757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.195961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.195975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.196145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.196159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.196314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.196328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.196484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.196501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.196713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.196727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 
00:38:06.194 [2024-12-13 03:49:07.196873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.196886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.197103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.197203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.197299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.197488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.197661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.197828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.197993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.198027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.198238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.198281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.194 qpair failed and we were unable to recover it. 00:38:06.194 [2024-12-13 03:49:07.198522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.194 [2024-12-13 03:49:07.198565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 
00:38:06.195 [2024-12-13 03:49:07.198897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.198911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.199168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.199183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.199387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.199401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.199559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.199572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.199718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.199731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.199975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.199990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.200133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.200147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.200310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.200324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.200414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.200427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.200578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.200591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 
00:38:06.195 [2024-12-13 03:49:07.200810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.200824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.200980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.200996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.201158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.201172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.201325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.201339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.201436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.201449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.201617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.201632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.201858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.201872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.202030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.202044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.202256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.202270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.202472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.202486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 
00:38:06.195 [2024-12-13 03:49:07.202643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.202657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.202831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.202845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.203049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.203095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.203311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.203362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.203674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.203726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.203899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.203928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.204030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.204262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.204367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.204482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 
00:38:06.195 [2024-12-13 03:49:07.204629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.204728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.204964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.204978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.205056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.205069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.205168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.205181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.205266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.205279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.195 [2024-12-13 03:49:07.205370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.195 [2024-12-13 03:49:07.205382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.195 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.205480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.205493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.205577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.205592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.205739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.205752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 
00:38:06.196 [2024-12-13 03:49:07.205856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.205870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.206866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.206879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 
00:38:06.196 [2024-12-13 03:49:07.207313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.207963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.207975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 
00:38:06.196 [2024-12-13 03:49:07.208422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.208974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.208988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 
00:38:06.196 [2024-12-13 03:49:07.209543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.209936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.209950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.196 qpair failed and we were unable to recover it. 00:38:06.196 [2024-12-13 03:49:07.210042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.196 [2024-12-13 03:49:07.210054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.210136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.210221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.210382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.210487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.210566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 
00:38:06.197 [2024-12-13 03:49:07.210668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.210754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.210767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.211915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.211939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 
00:38:06.197 [2024-12-13 03:49:07.212234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.212901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.212913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.213116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.213160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.213298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.213339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.213469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.213511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 
00:38:06.197 [2024-12-13 03:49:07.213786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.213829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.213977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.214020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.214194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.214237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.214471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.214512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.214704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.214747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.214896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.214947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.215154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.215195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.215389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.215429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.215688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.215739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.215881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.215895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 
00:38:06.197 [2024-12-13 03:49:07.216143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.216186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.216431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.216474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.216606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.216662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.197 qpair failed and we were unable to recover it. 00:38:06.197 [2024-12-13 03:49:07.216941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.197 [2024-12-13 03:49:07.216985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.217128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.217143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.217345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.217359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.217509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.217522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.217654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.217668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.217814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.217827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.217925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.217938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 
00:38:06.198 [2024-12-13 03:49:07.218087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.218820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.218832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.219053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 
00:38:06.198 [2024-12-13 03:49:07.219273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.219367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.219589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.219678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.219774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.219874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.219889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 
00:38:06.198 [2024-12-13 03:49:07.220619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.220880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.220892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.221057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.198 [2024-12-13 03:49:07.221072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.198 qpair failed and we were unable to recover it. 00:38:06.198 [2024-12-13 03:49:07.221155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.221246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.221328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.221431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.221520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 
00:38:06.199 [2024-12-13 03:49:07.221618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.221704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.221875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.221888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.222730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 
00:38:06.199 [2024-12-13 03:49:07.222894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.222907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.223110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.223124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.223310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.223339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.223494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.223507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.223645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.223659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.223798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.223812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.223884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.223897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.224054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.224126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.224350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 
00:38:06.199 [2024-12-13 03:49:07.224498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.224642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.224755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.224849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.224862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 
00:38:06.199 [2024-12-13 03:49:07.225672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.225975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.199 [2024-12-13 03:49:07.225988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.199 qpair failed and we were unable to recover it. 00:38:06.199 [2024-12-13 03:49:07.226156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.226255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.226407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.226493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.226650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.226815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 
00:38:06.200 [2024-12-13 03:49:07.226980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.226994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.227954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.227966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 
00:38:06.200 [2024-12-13 03:49:07.228153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.228913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.228992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.229086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.229273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 
00:38:06.200 [2024-12-13 03:49:07.229360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.229573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.229654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.229732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.229877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.229892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.230037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.230050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.230200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.230214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.230349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.230363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.230512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.230553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.200 [2024-12-13 03:49:07.230759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.230801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 
00:38:06.200 [2024-12-13 03:49:07.230998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.200 [2024-12-13 03:49:07.231012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.200 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.231983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.231998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.232090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 00:38:06.201 [2024-12-13 03:49:07.232169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.232185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it. 
00:38:06.201 [2024-12-13 03:49:07.232261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.201 [2024-12-13 03:49:07.232275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.201 qpair failed and we were unable to recover it.
00:38:06.206 [2024-12-13 03:49:07.232261 - 03:49:07.267078] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt against addr=10.0.0.2, port=4420, mostly on tqpair=0x61500033fe80, with occasional attempts on tqpair=0x61500032ff80, 0x615000326480 and 0x615000350000; none of the qpairs recover.
00:38:06.207 [2024-12-13 03:49:07.267337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.267353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.267491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.267510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.267736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.267751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.267915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.267934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.268135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.268148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.268365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.268379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.268554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.268567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.268713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.268727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.268898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.268912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.269068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.269082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 
00:38:06.207 [2024-12-13 03:49:07.269165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.269177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.269362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.269377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.269603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.269616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.269825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.269838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.269983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.269998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.270156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.270170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.270261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.270274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.270445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.270460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.270612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.270627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.270852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.270866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 
00:38:06.207 [2024-12-13 03:49:07.271069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.271084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.271238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.271252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.271324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.271335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.271607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.271621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.271823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.271836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.272036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.272050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.272270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.272284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.272501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.272515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.272666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.272680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.272829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.272845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 
00:38:06.207 [2024-12-13 03:49:07.272998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.273012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.273110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.273123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.273260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.273273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.207 [2024-12-13 03:49:07.273421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.207 [2024-12-13 03:49:07.273435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.207 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.273513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.273527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.273731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.273745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.273950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.273965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.274064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.274080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.274298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.274313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.274382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.274394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 
00:38:06.208 [2024-12-13 03:49:07.274479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.274492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.274629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.274644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.274903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.274921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.275163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.275177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.275266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.275279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.275523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.275537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.275796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.275811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.275956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.275970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.276147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.276161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.276356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.276370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 
00:38:06.208 [2024-12-13 03:49:07.276597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.276612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.276788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.276801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.276959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.276973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.277914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.277933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 
00:38:06.208 [2024-12-13 03:49:07.278021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.278959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.278973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.279141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.279155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.279302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.279316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 
00:38:06.208 [2024-12-13 03:49:07.279555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.208 [2024-12-13 03:49:07.279569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.208 qpair failed and we were unable to recover it. 00:38:06.208 [2024-12-13 03:49:07.279773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.279789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.279858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.279872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.280835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.280849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 
00:38:06.209 [2024-12-13 03:49:07.281000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.281014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.281087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.281100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.281281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.281296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.281404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.281419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.281681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.281697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.281908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.281932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.282155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.282170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.282340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.282354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.282565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.282578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.282788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.282802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 
00:38:06.209 [2024-12-13 03:49:07.282941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.282956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.283161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.283175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.283414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.283427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.283655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.283669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.283821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.283835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.283992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.284009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.284236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.284250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.284468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.284483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.284661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.284675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.284879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.284894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 
00:38:06.209 [2024-12-13 03:49:07.285068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.285084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.285185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.285199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.285338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.285352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.285506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.285520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.285664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.285679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.285836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.285850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.286084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.286111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.286265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.286279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.286461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.209 [2024-12-13 03:49:07.286475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.209 qpair failed and we were unable to recover it. 00:38:06.209 [2024-12-13 03:49:07.286707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.286721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 
00:38:06.210 [2024-12-13 03:49:07.286910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.286935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.287074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.287088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.287241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.287256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.287426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.287440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.287598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.287615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.287765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.287780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.287867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.287881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.288055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.288071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.288225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.288238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.288325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.288339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 
00:38:06.210 [2024-12-13 03:49:07.288556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.288572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.288723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.288738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.288883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.288898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.289078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.289093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.289229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.289243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.289327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.289341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.289542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.289556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.289781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.289796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.290052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.290154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 
00:38:06.210 [2024-12-13 03:49:07.290262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.290436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.290608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.290776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.290929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.290945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.291092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.291107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.291341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.291355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.291560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.291574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.291797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.291811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.291966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.291982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 
00:38:06.210 [2024-12-13 03:49:07.292184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.292198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.292405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.292419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.292586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.292600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.292757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.292772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.292945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.292960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.293161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.293175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.293273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.210 [2024-12-13 03:49:07.293288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.210 qpair failed and we were unable to recover it. 00:38:06.210 [2024-12-13 03:49:07.293518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.293532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.293687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.293703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.293876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.293891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 
00:38:06.211 [2024-12-13 03:49:07.294050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.294068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.294229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.294243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.294490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.294504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.294674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.294688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.294870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.294885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.295114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.295130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.295276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.295290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.295489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.295503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.295666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.295680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.295883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.295899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 
00:38:06.211 [2024-12-13 03:49:07.296126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.296141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.296345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.296359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.296590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.296605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.296856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.296870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.296962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.296977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.297113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.297127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.297221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.297235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.297429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.297443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.297533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.297545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.297642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.297658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 
00:38:06.211 [2024-12-13 03:49:07.297816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.297831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.298058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.298073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.298298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.298312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.298473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.298487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.298640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.298656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.298874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.298890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.298979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.298992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.299067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.299081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.299234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.299248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.299395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.299409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 
00:38:06.211 [2024-12-13 03:49:07.299567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.299581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.299715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.299729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.299862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.299875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.300009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.300024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.300183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.300196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.211 [2024-12-13 03:49:07.300412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.211 [2024-12-13 03:49:07.300426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.211 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.300513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.300527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.300599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.300611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.300763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.300778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.300866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.300879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 
00:38:06.212 [2024-12-13 03:49:07.301131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.301147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.301322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.301337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.301563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.301579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.301735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.301755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.301913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.301933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.302157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.302172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.302323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.302338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.302600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.302615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.302758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.302772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.303002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.303016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 
00:38:06.212 [2024-12-13 03:49:07.303169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.303183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.303388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.303403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.303628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.303642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.303797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.303811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.303949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.303964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.304117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.304132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.304266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.304279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.304454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.304469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.304540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.304775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.304790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 
00:38:06.212 [2024-12-13 03:49:07.304887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.304901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.305057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.305072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.305291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.305306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.305452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.305467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.305548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.305562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.305641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.305656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.305884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.305899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.306164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.306179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.306331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.306346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.212 qpair failed and we were unable to recover it. 00:38:06.212 [2024-12-13 03:49:07.306531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.212 [2024-12-13 03:49:07.306546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 
00:38:06.213 [2024-12-13 03:49:07.306692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.306706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.306851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.306866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.307933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.307965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.308124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 
00:38:06.213 [2024-12-13 03:49:07.308231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.308421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.308537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.308640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.308858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.308975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.308991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 
00:38:06.213 [2024-12-13 03:49:07.309650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.309982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.309998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 
00:38:06.213 [2024-12-13 03:49:07.310673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.213 [2024-12-13 03:49:07.310835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.213 [2024-12-13 03:49:07.310849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.213 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.310937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.310951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 
00:38:06.214 [2024-12-13 03:49:07.311783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.311864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.311877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.312894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.312907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 
00:38:06.214 [2024-12-13 03:49:07.313067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.313184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.313351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.313435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.313583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.313766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.313862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.313876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 
00:38:06.214 [2024-12-13 03:49:07.314393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.214 [2024-12-13 03:49:07.314868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.214 qpair failed and we were unable to recover it. 00:38:06.214 [2024-12-13 03:49:07.314966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.314980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 
00:38:06.215 [2024-12-13 03:49:07.315482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.315967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.315982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 
00:38:06.215 [2024-12-13 03:49:07.316788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.316972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.316986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 00:38:06.215 [2024-12-13 03:49:07.317651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.215 [2024-12-13 03:49:07.317667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.215 qpair failed and we were unable to recover it. 
00:38:06.217 [2024-12-13 03:49:07.324356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.217 [2024-12-13 03:49:07.324388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:06.217 qpair failed and we were unable to recover it.
00:38:06.217 [2024-12-13 03:49:07.324576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.217 [2024-12-13 03:49:07.324611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:06.217 qpair failed and we were unable to recover it.
00:38:06.217 [2024-12-13 03:49:07.324732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.217 [2024-12-13 03:49:07.324763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:06.217 qpair failed and we were unable to recover it.
00:38:06.221 [2024-12-13 03:49:07.347127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.221 [2024-12-13 03:49:07.347142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.221 qpair failed and we were unable to recover it. 00:38:06.221 [2024-12-13 03:49:07.347366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.221 [2024-12-13 03:49:07.347382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.221 qpair failed and we were unable to recover it. 00:38:06.221 [2024-12-13 03:49:07.347534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.221 [2024-12-13 03:49:07.347548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.221 qpair failed and we were unable to recover it. 00:38:06.221 [2024-12-13 03:49:07.347782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.221 [2024-12-13 03:49:07.347798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.221 qpair failed and we were unable to recover it. 00:38:06.221 [2024-12-13 03:49:07.347883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.221 [2024-12-13 03:49:07.347898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.221 qpair failed and we were unable to recover it. 00:38:06.221 [2024-12-13 03:49:07.348074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.348090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.348227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.348242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.348526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.348541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.348726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.348741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.348962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.348978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 
00:38:06.222 [2024-12-13 03:49:07.349202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.349217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.349511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.349538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.349627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.349645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.349782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.349797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.350031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.350050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.350190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.350210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.350355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.350370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.350526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.350541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.350719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.350734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.350962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.350978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 
00:38:06.222 [2024-12-13 03:49:07.351090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.351105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.351281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.351296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.351386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.351401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.351551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.351565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.351788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.351803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.352029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.352044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.352260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.352275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.352423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.352438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.352650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.352666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.352884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.352899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 
00:38:06.222 [2024-12-13 03:49:07.353056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.353071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.353397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.353420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.353633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.353648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.353745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.353760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.222 [2024-12-13 03:49:07.353877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.222 [2024-12-13 03:49:07.353892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.222 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.354116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.354133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.354223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.354237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.354390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.354405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.354558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.354572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.354687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.354702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 
00:38:06.223 [2024-12-13 03:49:07.354795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.354810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.355038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.355054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.355254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.355270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.355421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.355437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.355675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.355691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.355855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.355871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.356030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.356147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.356368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.356573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 
00:38:06.223 [2024-12-13 03:49:07.356734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.356849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.356956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.356973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.357161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.357176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.357379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.357395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.357661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.357676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.357796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.357812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.358035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.358051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.358220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.358236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 00:38:06.223 [2024-12-13 03:49:07.358390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.223 [2024-12-13 03:49:07.358405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.223 qpair failed and we were unable to recover it. 
00:38:06.505 [2024-12-13 03:49:07.358554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.358569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.358722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.358739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.358928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.358944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.359073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.359089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.359248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.359262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.359414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.359429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.359655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.359669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.359773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.359788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.360033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.360048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.360124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.360138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 
00:38:06.505 [2024-12-13 03:49:07.360284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.360298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.360435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.360450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.360711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.360726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.360881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.505 [2024-12-13 03:49:07.360895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.505 qpair failed and we were unable to recover it. 00:38:06.505 [2024-12-13 03:49:07.361058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.361072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.361210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.361225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.361312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.361324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.361471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.361486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.361718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.361733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.361893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.361923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 
00:38:06.506 [2024-12-13 03:49:07.362073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.362087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.362227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.362242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.362466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.362481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.362694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.362709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.362847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.362863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.363091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.363106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.363322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.363336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.363436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.363450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.363584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.363598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.363743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.363757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 
00:38:06.506 [2024-12-13 03:49:07.363904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.363930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.364072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.364085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.364222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.364239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.364451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.364464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.364634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.364649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.364856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.364869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.365079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.365095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.365325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.365339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.365536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.365550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.365708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.365722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 
00:38:06.506 [2024-12-13 03:49:07.365883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.365897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.366073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.366087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.366184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.366199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.366331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.366345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.366545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.366560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.366783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.366798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.367003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.367019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.367224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.367238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.367463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.367477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.367626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.367641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 
00:38:06.506 [2024-12-13 03:49:07.367869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.367883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.368044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.368071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.368314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.368328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.368482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.368497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.368651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.368666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.368889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.368904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.369205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.369243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.369372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.369403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.506 qpair failed and we were unable to recover it. 00:38:06.506 [2024-12-13 03:49:07.369558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.506 [2024-12-13 03:49:07.369582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.369834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.369857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 
00:38:06.507 [2024-12-13 03:49:07.370095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.370118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.370285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.370307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.370484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.370501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.370657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.370671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.370914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.370968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.371162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.371204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.371819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.371863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.372163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.372205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.372421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.372463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.372764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.372806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 
00:38:06.507 [2024-12-13 03:49:07.373090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.373340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.373380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.373632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.373649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.373851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.373864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.374059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.374076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.374313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.374355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.374506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.374551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.374757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.374798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.374941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.374985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 00:38:06.507 [2024-12-13 03:49:07.375113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.507 [2024-12-13 03:49:07.375155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.507 qpair failed and we were unable to recover it. 
00:38:06.512 [2024-12-13 03:49:07.412903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.412930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.413135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.413156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.413258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.413271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.413438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.413452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.413714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.413728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.413933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.413947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.414024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.414036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.414191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.414204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.414361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.414374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.414598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.414642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 
00:38:06.512 [2024-12-13 03:49:07.414866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.414909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.415067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.415110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.415254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.415269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.415422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.415437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.415524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.415538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.415766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.415779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.415938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.415952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.416032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.416045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.416201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.416214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.416425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.416486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 
00:38:06.512 [2024-12-13 03:49:07.416679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.416738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.417040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.417088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.417269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.417314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.417473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.417515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.417788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.417810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.418039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.418156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.418266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.418365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.418608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 
00:38:06.512 [2024-12-13 03:49:07.418714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.418892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.418931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.419137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.419185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.419318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.419359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.419598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.419610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.419811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.419824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.420058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.420072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.420238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.512 [2024-12-13 03:49:07.420273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.512 qpair failed and we were unable to recover it. 00:38:06.512 [2024-12-13 03:49:07.420422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.420465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.420596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.420637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 
00:38:06.513 [2024-12-13 03:49:07.420968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.421011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.421188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.421231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.421423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.421436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.421641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.421684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.421966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.422010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.422171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.422214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.422453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.422468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.422645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.422658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.422888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.422942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.423104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.423146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 
00:38:06.513 [2024-12-13 03:49:07.423434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.423476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.423629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.423671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.423857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.423898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.424123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.424167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.424377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.424418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.424692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.424705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.424993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.425102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.425264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.425367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 
00:38:06.513 [2024-12-13 03:49:07.425542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.425701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.425943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.425985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.426186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.426228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.426451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.426493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.426691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.426705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.426855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.426868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.427030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.427044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.427200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.427213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.427410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.427423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 
00:38:06.513 [2024-12-13 03:49:07.427648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.427694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.427966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.428020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.428236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.428285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.428437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.428480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.513 qpair failed and we were unable to recover it. 00:38:06.513 [2024-12-13 03:49:07.428734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.513 [2024-12-13 03:49:07.428748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.429019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.429033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.429193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.429206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.429409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.429422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.429507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.429519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.429728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.429741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 
00:38:06.514 [2024-12-13 03:49:07.429985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.429999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.430151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.430165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.430319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.430332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.430531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.430544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.430712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.430726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.430892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.430953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.431125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.431167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.431476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.431519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.431806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.431849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.432108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.432150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 
00:38:06.514 [2024-12-13 03:49:07.432436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.432450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.432764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.432778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.432949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.432963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.433189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.433203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.433301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.433314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.433418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.433432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.433581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.433594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.433817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.433831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.433928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.433941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.434053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.434067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 
00:38:06.514 [2024-12-13 03:49:07.434229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.434242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.434342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.434355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.434440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.434452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.434635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.434649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.434914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.434969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.435106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.435146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.435378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.435418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.435629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.435670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.435874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.435915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.436072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.436113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 
00:38:06.514 [2024-12-13 03:49:07.436321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.436362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.436575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.436616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.436847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.436896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.437109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.437151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.437353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.437366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.437435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.437448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.437665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.437679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.437869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.437882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.438096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.438110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 00:38:06.514 [2024-12-13 03:49:07.438201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.514 [2024-12-13 03:49:07.438213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.514 qpair failed and we were unable to recover it. 
00:38:06.515 [2024-12-13 03:49:07.438401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.438414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.438641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.438655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.438803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.438816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.439818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 
00:38:06.515 [2024-12-13 03:49:07.439914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.439932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.440913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.440930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.441157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.441170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.441372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.441386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 
00:38:06.515 [2024-12-13 03:49:07.441611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.441653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.441853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.441893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.442192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.442234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.442356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.442369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.442597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.442610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.442693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.442706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.442857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.442871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.443069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.443083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.443187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.443201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.443347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.443360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 
00:38:06.515 [2024-12-13 03:49:07.443518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.443531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.443840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.443881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.444156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.444211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.444378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.444418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.444703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.444716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.444865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.444879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.445042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.445056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.445215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.445229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.445379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.445393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.445493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.445507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 
00:38:06.515 [2024-12-13 03:49:07.445706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.445720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.445926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.445939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.515 [2024-12-13 03:49:07.446080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.515 [2024-12-13 03:49:07.446093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.515 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.446328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.446368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.446585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.446627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.446958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.447002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.447337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.447380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.447633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.447675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.447940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.447984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.448257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.448297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 
00:38:06.516 [2024-12-13 03:49:07.448496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.448509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.448676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.448690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.448888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.448938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.449212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.449254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.449540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.449581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.449853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.449893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.450103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.450146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.450346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.450396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.450615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.450656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.450942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.450987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 
00:38:06.516 [2024-12-13 03:49:07.451201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.451243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.451445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.451470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.451683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.451696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.451868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.451881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.452026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.452040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.452242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.452285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.452439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.452481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.452738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.452780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.453066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.453108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.453270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.453320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 
00:38:06.516 [2024-12-13 03:49:07.453393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.453405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.453597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.453611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.453743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.453756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.453931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.453945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.454021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.454033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.454135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.454147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.454361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.454374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.454538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.454582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.454826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.454871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.456117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.456150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 
00:38:06.516 [2024-12-13 03:49:07.456317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.456331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.456551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.456565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.456767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.456780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.456928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.456952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.457113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.457151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.457429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.457471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.457685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.457727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.516 [2024-12-13 03:49:07.457946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.516 [2024-12-13 03:49:07.457991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.516 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.458230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.458275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.458449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.458463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 
00:38:06.517 [2024-12-13 03:49:07.458641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.458682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.458890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.458956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.459268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.459309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.459619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.459659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.459913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.459931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.460161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.460174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.460343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.460356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.460506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.460542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.460799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.460840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.461139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.461190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 
00:38:06.517 [2024-12-13 03:49:07.461470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.461511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.461765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.461807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.462098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.462142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.462407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.462420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.462555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.462568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.462768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.462781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.462955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.462969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.463201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.463243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.463476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.463517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.463821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.463864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 
00:38:06.517 [2024-12-13 03:49:07.464166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.464208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.464468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.464481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.464776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.464789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.465042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.465055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.465279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.465292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.465467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.465480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.465582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.465596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.465818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.465832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.466005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.466121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 
00:38:06.517 [2024-12-13 03:49:07.466294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.466442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.466553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.466708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.466865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.466878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.467082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.467096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.517 [2024-12-13 03:49:07.467242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.517 [2024-12-13 03:49:07.467255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.517 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.467397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.467411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.467598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.467611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.467699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.467711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 
00:38:06.518 [2024-12-13 03:49:07.467938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.467952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.468179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.468192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.468302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.468314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.468484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.468497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.468739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.468752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.468927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.468940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.469139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.469152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.469404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.469418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.469657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.469698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.469851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.469899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 
00:38:06.518 [2024-12-13 03:49:07.470067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.470108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.470305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.470348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.470596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.470642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.470776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.470795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.471061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.471103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.471388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.471430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.471754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.471768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.471937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.471980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.472239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.472280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.472477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.472490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 
00:38:06.518 [2024-12-13 03:49:07.472572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.472585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.472811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.472825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.473054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.473165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.473179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.473400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.473414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.473577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.473593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.473811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.473824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.474036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.474050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.474216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.474230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.474410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.474423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 
00:38:06.518 [2024-12-13 03:49:07.474649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.474663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.474898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.474912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.475083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.475098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.475234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.475248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.475341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.475353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.475551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.475564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.475770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.475784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.475941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.475956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.476061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.476074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.476180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.476193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 
00:38:06.518 [2024-12-13 03:49:07.476403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.518 [2024-12-13 03:49:07.476416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.518 qpair failed and we were unable to recover it. 00:38:06.518 [2024-12-13 03:49:07.476629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.476643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.476811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.476825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.476977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.476991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.477155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.477169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.477368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.477381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.477535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.477548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.477789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.477831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.478048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.478090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.478235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.478284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 
00:38:06.519 [2024-12-13 03:49:07.478555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.478568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.478729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.478743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.478940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.478983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.479191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.479232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.479461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.479505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.479702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.479718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.479874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.479888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 
00:38:06.519 [2024-12-13 03:49:07.480512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.480841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.480988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.481002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.481171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.481213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.481456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.481497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.481800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.481842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.482151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.482193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.482450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.482492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 
00:38:06.519 [2024-12-13 03:49:07.482650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.482691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.482891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.482961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.483171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.483213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.483408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.483461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.483628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.483641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.483846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.483886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.484064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.484109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.484340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.484393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.484551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.484592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.484808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.484850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 
00:38:06.519 [2024-12-13 03:49:07.485029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.485072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.485233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.485275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.485552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.485593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.485851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.485892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.519 [2024-12-13 03:49:07.486090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.519 [2024-12-13 03:49:07.486133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.519 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.486378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.486420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.486566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.486607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.486755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.486768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.486990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.487004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.487148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.487201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 
00:38:06.520 [2024-12-13 03:49:07.487455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.487496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.487687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.487737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.487885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.487898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.488075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.488089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.488291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.488304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.488466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.488506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.488720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.488761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.488973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.489016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.489170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.489211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.489508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.489521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 
00:38:06.520 [2024-12-13 03:49:07.489720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.489733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.489935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.489948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.490115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.490157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.490369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.490411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.490671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.490717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.490938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.490952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.491156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.491169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.491329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.491371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.491605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.491646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.491966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.492007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 
00:38:06.520 [2024-12-13 03:49:07.492265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.492306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.492465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.492506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.492736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.492776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.492997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.493158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.493305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.493407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.493549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.493723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.493888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.493901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 
00:38:06.520 [2024-12-13 03:49:07.494115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.494129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.494278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.494292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.494516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.494529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.494691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.494704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.494805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.494819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.494958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.494972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.495160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.495174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.495325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.495338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.495493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.520 [2024-12-13 03:49:07.495507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.520 qpair failed and we were unable to recover it. 00:38:06.520 [2024-12-13 03:49:07.495773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.495820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 
00:38:06.521 [2024-12-13 03:49:07.496051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.496093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.496223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.496265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.496371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.496384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.496572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.496585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.496809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.496823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.496906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.496924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.497153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.497166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.497259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.497272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.497427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.497440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.497534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.497547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 
00:38:06.521 [2024-12-13 03:49:07.497665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.497682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.497910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.497934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.498083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.498096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.498243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.498256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.498461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.498473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.498624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.498637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.498844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.498857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.498952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.498965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.499168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.499181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.499314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.499327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 
00:38:06.521 [2024-12-13 03:49:07.499523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.499536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.499606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.499619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.499761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.499773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.499851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.499863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.499996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.500010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.500172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.500185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.500283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.500295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.500449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.500462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.500633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.500687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.500824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.500864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 
00:38:06.521 [2024-12-13 03:49:07.501088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.501132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.501407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.501448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.501604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.501645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.501852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.501893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.521 [2024-12-13 03:49:07.502119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.521 [2024-12-13 03:49:07.502161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.521 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.502325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.502366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.502575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.502615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.502814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.502855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.503076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.503120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.503315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.503362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 
00:38:06.522 [2024-12-13 03:49:07.503701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.503740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.503908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.503928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.504069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.504082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.504287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.504316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.504459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.504501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.504754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.504795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.505079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.505121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.505441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.505481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.505759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.505800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.506010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.506052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 
00:38:06.522 [2024-12-13 03:49:07.506198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.506238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.506428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.506468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.506685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.506733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.507040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.507086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.507297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.507338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.507547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.507588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.507849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.507862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.508001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.508014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.508118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.508131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.508277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.508290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 
00:38:06.522 [2024-12-13 03:49:07.508549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.508590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.508796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.508837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.509179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.509223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.509506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.509546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.509829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.509869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.510098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.510140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.510291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.510331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.510607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.510620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.510768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.510781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.510937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.510951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 
00:38:06.522 [2024-12-13 03:49:07.511063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.511076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.511276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.511289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.511433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.511446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.511703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.511716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.511868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.511881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.511948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.511967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.512101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.512114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.522 [2024-12-13 03:49:07.512289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.522 [2024-12-13 03:49:07.512302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.522 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.512406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.512419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.512519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.512535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 
00:38:06.523 [2024-12-13 03:49:07.512667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.512680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.512813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.512826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.513039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.513053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.513326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.513340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.513573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.513586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.513789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.513830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.514041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.514083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.514248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.514289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.514485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.514527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.514726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.514768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 
00:38:06.523 [2024-12-13 03:49:07.515018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.515060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.515322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.515364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.515581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.515623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.515755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.515769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.515933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.515946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.516083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.516097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.516253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.516266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.516443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.516455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.516554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.516567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.516760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.516773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 
00:38:06.523 [2024-12-13 03:49:07.516980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.516993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.517194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.517208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.517384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.517397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.517647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.517689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.517933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.517976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.518134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.518175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.518258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:38:06.523 [2024-12-13 03:49:07.518598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.518683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.518952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.518997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.519192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.519217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 
00:38:06.523 [2024-12-13 03:49:07.519314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.519336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.519492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.519513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.519754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.519774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.519885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.519907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.520033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.520054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.520248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.520269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.520435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.520455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.520652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.520694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.520990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.521034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.521333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.521386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 
00:38:06.523 [2024-12-13 03:49:07.521554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.521575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.521795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.521816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.521988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.522009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.523 qpair failed and we were unable to recover it. 00:38:06.523 [2024-12-13 03:49:07.522192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.523 [2024-12-13 03:49:07.522233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.522513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.522554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.522783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.522825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.523047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.523090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.523301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.523343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.523651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.523672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.523921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.523947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 
00:38:06.524 [2024-12-13 03:49:07.524050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.524071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.524225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.524245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.524409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.524430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.524725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.524750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.525000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.525017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.525169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.525183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.525350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.525363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.525464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.525477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.525565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.525577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.525797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.525810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 
00:38:06.524 [2024-12-13 03:49:07.526033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.526047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.526151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.526164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.526332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.526345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.526479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.526492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.526769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.526810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.526983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.527028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.527156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.527198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.527461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.527503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.527714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.527727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.527901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.527914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 
00:38:06.524 [2024-12-13 03:49:07.528097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.528138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.528420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.528461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.528761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.528802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.529024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.529067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.529301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.529343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.529557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.529570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.529772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.529785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.530021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.530147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.530249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 
00:38:06.524 [2024-12-13 03:49:07.530372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.530562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.530802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.530913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.530931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.531084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.531097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.531334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.531347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.531441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.531454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.531701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.531730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.524 qpair failed and we were unable to recover it. 00:38:06.524 [2024-12-13 03:49:07.531908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.524 [2024-12-13 03:49:07.531926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.532080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.532093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 
00:38:06.525 [2024-12-13 03:49:07.532248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.532261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.532360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.532372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.532451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.532464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.532724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.532740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.532895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.532909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.533121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.533162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.533395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.533436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.533572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.533614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.533840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.533853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.533967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.533981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 
00:38:06.525 [2024-12-13 03:49:07.534138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.534152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.534335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.534349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.534494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.534507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.534651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.534665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.534909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.534961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.535157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.535197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.535395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.535436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.535717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.535759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.536045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.536087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.536297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.536338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 
00:38:06.525 [2024-12-13 03:49:07.536601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.536614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.536758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.536771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.536995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.537008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.537183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.537232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.537440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.537481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.537762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.537802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.538055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.538098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.538229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.538272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.538464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.538505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.538780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.538822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 
00:38:06.525 [2024-12-13 03:49:07.539082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.539095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.539260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.539274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.539373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.539385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.539607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.539621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.539766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.539778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.539986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.539999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.540077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.540090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.525 qpair failed and we were unable to recover it. 00:38:06.525 [2024-12-13 03:49:07.540228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.525 [2024-12-13 03:49:07.540241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.540386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.540402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.540638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.540652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 
00:38:06.526 [2024-12-13 03:49:07.540785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.540798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.540944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.540957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.541107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.541120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.541331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.541347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.541499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.541528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.541830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.541871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.542046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.542089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.542349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.542390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.542608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.542621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.542820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.542833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 
00:38:06.526 [2024-12-13 03:49:07.543011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.543025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.543256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.543297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.543449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.543490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.543678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.543720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.543991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.544005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.544167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.544180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.544332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.544345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.544576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.544589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.544873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.544913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.545144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.545184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 
00:38:06.526 [2024-12-13 03:49:07.545351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.545394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.545614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.545703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.545973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.545988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.546153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.546167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.546263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.546276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.546381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.546394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.546500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.546516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.546825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.546838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.546978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.546992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.547134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 
00:38:06.526 [2024-12-13 03:49:07.547260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.547365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.547442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.547627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.547732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.547879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.547892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.548069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.548083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.548366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.548408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.548709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.548750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.548928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.548942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 
00:38:06.526 [2024-12-13 03:49:07.549116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.549158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.549372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.549413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.526 [2024-12-13 03:49:07.549621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.526 [2024-12-13 03:49:07.549662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.526 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.549785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.549800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.549934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.549948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.550037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.550139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.550240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.550358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.550461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 
00:38:06.527 [2024-12-13 03:49:07.550566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.550843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.550856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.551008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.551022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.551231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.551244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.551349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.551362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.551641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.551654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.551898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.551912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.552176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.552189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.552326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.552339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.552495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.552508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 
00:38:06.527 [2024-12-13 03:49:07.552683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.552696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.552928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.552942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.553123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.553136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.553245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.553259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.553414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.553427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.553507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.553520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.553804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.553818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.553883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.553896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 
00:38:06.527 [2024-12-13 03:49:07.554293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.554975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.554987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.555063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.555074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.555206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.555219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.555421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.555475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.555624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 
00:38:06.527 [2024-12-13 03:49:07.555944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.555991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.556097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.556110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.556201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.556213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.556283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.556299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.556556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.556570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.556785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.527 [2024-12-13 03:49:07.556802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.527 qpair failed and we were unable to recover it. 00:38:06.527 [2024-12-13 03:49:07.556948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.556962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.557073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.557086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.557251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.557264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.557413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.557426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 
00:38:06.528 [2024-12-13 03:49:07.557696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.557737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.558069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.558112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.558254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.558295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.558522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.558564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.558780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.558821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.559054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.559067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.559232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.559246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.559397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.559410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.559661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.559686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.559885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.559898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 
00:38:06.528 [2024-12-13 03:49:07.560065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.560079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.560232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.560245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.560342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.560354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.560449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.560462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.560637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.560651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.560873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.560916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.561140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.561181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.561459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.561501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.561774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.561815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.562018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.562032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 
00:38:06.528 [2024-12-13 03:49:07.562264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.562277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.562374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.562386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.562557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.562570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.562710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.562724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.562875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.562888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.563042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.563056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.563208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.563221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.563372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.563385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.563459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.563471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.563549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.563561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 
00:38:06.528 [2024-12-13 03:49:07.563734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.563748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.564003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.564044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.564255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.564297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.564615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.564661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.564802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.564844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.565151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.565193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.565445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.565487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.565744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.528 [2024-12-13 03:49:07.565786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.528 qpair failed and we were unable to recover it. 00:38:06.528 [2024-12-13 03:49:07.565992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.566006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.566091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.566104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 
00:38:06.529 [2024-12-13 03:49:07.566253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.566266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.566443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.566456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.566719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.566760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.566967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.567010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.567295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.567336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.567640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.567680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.567894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.567947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.568264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.568305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.568513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.568555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.568815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.568855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 
00:38:06.529 [2024-12-13 03:49:07.569082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.569125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.569382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.569422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.569752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.569794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.570073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.570116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.570275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.570316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.570518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.570558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.570814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.570852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.571000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.571013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.571186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.571201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.571298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.571315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 
00:38:06.529 [2024-12-13 03:49:07.571470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.571483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.571741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.571783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.571976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.572018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.572172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.572214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.572444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.572484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.572626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.572669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.572953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.573001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.573103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.573116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.573297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.573311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.573569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.573611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 
00:38:06.529 [2024-12-13 03:49:07.573868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.573910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.574136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.574178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.574382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.574423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.574625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.574672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.574859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.574872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.575105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.575119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.575323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.575336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.529 qpair failed and we were unable to recover it. 00:38:06.529 [2024-12-13 03:49:07.575552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.529 [2024-12-13 03:49:07.575565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.575764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.575777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.575955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.575968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 
00:38:06.530 [2024-12-13 03:49:07.576141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.576181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.576395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.576437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.576659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.576707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.576911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.576928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.577082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.577096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.577197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.577210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.577413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.577427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.577632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.577656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.577838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.577880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.578069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.578113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 
00:38:06.530 [2024-12-13 03:49:07.578261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.578301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.578564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.578604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.578827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.578869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.579103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.579117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.579318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.579331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.579417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.579429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.579681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.579694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.579891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.579904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.580080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.580093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.580244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.580257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 
00:38:06.530 [2024-12-13 03:49:07.580514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.580557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.580861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.580903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.581164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.581205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.581418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.581458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.581786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.581827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.582057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.582071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.582167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.582179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.582286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.582299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.582446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.582459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.582603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.582634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 
00:38:06.530 [2024-12-13 03:49:07.582862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.582905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.583111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.583153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.583438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.583479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.583771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.583787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.583952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.583965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.584060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.584073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.584224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.584237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.584371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.584385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.584697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.584738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.530 [2024-12-13 03:49:07.584943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.584984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 
00:38:06.530 [2024-12-13 03:49:07.585197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.530 [2024-12-13 03:49:07.585238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.530 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.585450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.585491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.585683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.585696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.585951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.585970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.586108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.586121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.586272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.586285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.586489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.586502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.586653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.586667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.586867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.586928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.587155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.587196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 
00:38:06.531 [2024-12-13 03:49:07.587388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.587429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.587632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.587645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.587808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.587822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.588045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.588087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.588292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.588334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.588568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.588609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.588887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.588939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.589166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.589207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.589462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.589502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.589647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.589688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 
00:38:06.531 [2024-12-13 03:49:07.589967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.590067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.590333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.590379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.590577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.590622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.590865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.590880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.591034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.591047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.591214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.591227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.591404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.591417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.591675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.591688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.591839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.591852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.591987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 
00:38:06.531 [2024-12-13 03:49:07.592144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.592257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.592420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.592624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.592806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.592904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.592924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.593129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.593142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.593299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.593312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.593452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.593465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.593663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.593677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 
00:38:06.531 [2024-12-13 03:49:07.593812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.593825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.593959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.593973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.594123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.594136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.531 [2024-12-13 03:49:07.594295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.531 [2024-12-13 03:49:07.594309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.531 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.594511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.594525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.594769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.594783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.594983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.594996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.595151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.595165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.595267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.595280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.595387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.595400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 
00:38:06.532 [2024-12-13 03:49:07.595670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.595709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.595913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.595966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.596224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.596265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.596510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.596702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.596715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.596922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.596935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.597087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.597101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.597181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.597193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.597477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.597490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.597720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.597733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 
00:38:06.532 [2024-12-13 03:49:07.597999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.598044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.598233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.598260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.598391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.598420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.598622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.598637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.598846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.598859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.599077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.599090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.599260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.599301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.599544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.599585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.599741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.599782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.600034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.600048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 
00:38:06.532 [2024-12-13 03:49:07.600131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.600143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.600242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.600255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.600460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.600474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.600583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.600603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.600850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.600863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.601021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.601127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.601289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.601477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.601660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 
00:38:06.532 [2024-12-13 03:49:07.601808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.601965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.601979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.602240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.602253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.602470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.602513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.602717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.602757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.602957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.603002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.603227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.603240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.603332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.603344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.603558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.532 [2024-12-13 03:49:07.603572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.532 qpair failed and we were unable to recover it. 00:38:06.532 [2024-12-13 03:49:07.603803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.603816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 
00:38:06.533 [2024-12-13 03:49:07.603963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.603976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.604145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.604159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.604304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.604318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.604482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.604524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.604727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.604770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.605002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.605045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.605281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.605321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.605521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.605562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.605767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.605807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.606012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.606067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 
00:38:06.533 [2024-12-13 03:49:07.606183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.606210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.606314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.606336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.606556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.606577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.606762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.606777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.606927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.606941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.607188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.607231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.607457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.607497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.607806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.607848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.608089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.608131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.608295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 
00:38:06.533 [2024-12-13 03:49:07.608563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.608604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.608743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.608784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.609075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.609117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.609401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.609448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.609677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.609718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.610003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.610045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.610211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.610252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.610409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.610451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.610687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.610726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.610866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.610907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 
00:38:06.533 [2024-12-13 03:49:07.611079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.611120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.611333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.611374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.611679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.611721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.611968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.612011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.612211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.612252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.612511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.612554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.612826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.612866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.613140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.533 [2024-12-13 03:49:07.613182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.533 qpair failed and we were unable to recover it. 00:38:06.533 [2024-12-13 03:49:07.613413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.613453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.613751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.613791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 
00:38:06.534 [2024-12-13 03:49:07.614030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.614043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.614286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.614300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.614530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.614544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.614642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.614665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.614800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.614812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.615035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.615049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.615201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.615214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.615374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.615387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.615567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.615580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.615742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.615755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 
00:38:06.534 [2024-12-13 03:49:07.616044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.616062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.616163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.616176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.616266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.616278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.616429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.616443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.616661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.616675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.616925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.616939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.617110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.617123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.617296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.617309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.617463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.617476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.617700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.617713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 
00:38:06.534 [2024-12-13 03:49:07.617876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.617889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.618046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.618060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.618273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.618314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.618519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.618568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.618776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.618817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.618994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.619008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.619156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.619185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.619329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.619370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.619580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.619622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.619899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.619913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 
00:38:06.534 [2024-12-13 03:49:07.620088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.620102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.620259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.620273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.620361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.620374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.620627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.620668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.620869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.620910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.621194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.621237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.621450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.621490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.534 [2024-12-13 03:49:07.621756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.534 [2024-12-13 03:49:07.621799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.534 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.622062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.622105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.622315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.622355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 
00:38:06.535 [2024-12-13 03:49:07.622553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.622594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.622821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.622863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.623159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.623202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.623439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.623481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.623697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.623737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.624023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.624066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.624215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.624255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.624469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.624509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.624662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.624675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.624835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.624874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 
00:38:06.535 [2024-12-13 03:49:07.625100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.625149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.625414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.625456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.625755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.625796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.626061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.626104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.626311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.626353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.626567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.626607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.626756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.626798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.626960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.626973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.627159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.627202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.627411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.627451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 
00:38:06.535 [2024-12-13 03:49:07.627659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.627700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.627803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.627816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.628073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.628087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.628242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.628256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.628467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.628509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.628783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.628826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.629108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.629150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.629364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.629405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.629661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.629703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.629855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.629895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 
00:38:06.535 [2024-12-13 03:49:07.630123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.630163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.630335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.630348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.630451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.630464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.630701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.630715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.630860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.630878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.631036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.631050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.631252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.631265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.631469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.631482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.631673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.631714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.631971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.632013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 
00:38:06.535 [2024-12-13 03:49:07.632154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.632196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.632456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.632497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.632799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.535 [2024-12-13 03:49:07.632841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.535 qpair failed and we were unable to recover it. 00:38:06.535 [2024-12-13 03:49:07.633125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.633168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.633435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.633476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.633752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.633793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.634041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.634085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.634382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.634424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.634644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.634685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.634877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.634930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 
00:38:06.536 [2024-12-13 03:49:07.635168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.635186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.635280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.635293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.635476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.635489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.635645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.635658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.635907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.635927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.636126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.636169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.636466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.636507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.636776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.636818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.637042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.637085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.637294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.637336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 
00:38:06.536 [2024-12-13 03:49:07.637533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.637574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.637781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.637794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.637930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.637943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.638095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.638108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.638266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.638279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.638441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.638482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.638621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.638661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.638891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.638944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.639127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.639141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.639218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.639230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 
00:38:06.536 [2024-12-13 03:49:07.639378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.639391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.639570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.639584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.639758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.639771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.639987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.640001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.640157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.640171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.640323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.640337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.640578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.640591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.640819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.640833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.640989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.641003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.641157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.641170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 
00:38:06.536 [2024-12-13 03:49:07.641400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.641440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.641587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.641627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.641901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.641956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.642200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.642240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.642497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.642537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.642818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.642859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.643094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.643107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.643211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.536 [2024-12-13 03:49:07.643225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.536 qpair failed and we were unable to recover it. 00:38:06.536 [2024-12-13 03:49:07.643376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.643389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.643622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.643635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 
00:38:06.537 [2024-12-13 03:49:07.643880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.643896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.643999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.644012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.644154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.644167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.644320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.644333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.644430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.644443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.644703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.644716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.644827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.644840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.645053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.645067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.645216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.645234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.645436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.645450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 
00:38:06.537 [2024-12-13 03:49:07.645653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.645666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.645889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.645903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.646846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.646861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.647027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 
00:38:06.537 [2024-12-13 03:49:07.647138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.647230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.647386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.647649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.647805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.647915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.647932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.648126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.648140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.648313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.648354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.648549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.648592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.648849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.648889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 
00:38:06.537 [2024-12-13 03:49:07.649073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.649088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.649290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.649303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.649437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.649452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.649683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.649726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.649969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.650011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.650298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.650339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.650559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.650600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.650802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.537 [2024-12-13 03:49:07.650816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.537 qpair failed and we were unable to recover it. 00:38:06.537 [2024-12-13 03:49:07.650975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.650989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.651210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.651223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 
00:38:06.538 [2024-12-13 03:49:07.651305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.651321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.651425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.651438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.651608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.651622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.651805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.651849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.652065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.652107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.652323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.652364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.652520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.652562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.652856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.652898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.653181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.653226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.653427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.653451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 
00:38:06.538 [2024-12-13 03:49:07.653555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.653576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.653829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.653850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.654015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.654037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.654200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.654221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.654391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.654408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.654551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.654564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.654732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.654747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.654897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.654910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.655017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.655126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 
00:38:06.538 [2024-12-13 03:49:07.655291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.655440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.655679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.655869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.655975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.655988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.656235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.656250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.656409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.656422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.656523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.656536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.656748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.656762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.656842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.656855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 
00:38:06.538 [2024-12-13 03:49:07.656989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.657003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.657153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.657194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.657485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.657527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.657748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.657790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.658013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.658057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.658267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.658310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.658473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.658514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.658768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.658813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.659077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.659096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.659192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.659204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 
00:38:06.538 [2024-12-13 03:49:07.659311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.538 [2024-12-13 03:49:07.659327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.538 qpair failed and we were unable to recover it. 00:38:06.538 [2024-12-13 03:49:07.659464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.659477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.659614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.659628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.659859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.659900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.660202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.660246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.660440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.660483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.660683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.660724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.660928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.660972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.661216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.661230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.661375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.661389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 
00:38:06.539 [2024-12-13 03:49:07.661592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.661606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.661794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.661837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.662056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.662099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.662247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.662260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.662358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.662370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.662593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.662607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.662750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.662763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.663008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.663051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.663228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.663271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.663406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.663448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 
00:38:06.539 [2024-12-13 03:49:07.663734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.663776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.663901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.663915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.664986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.664999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.665077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.665089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 
00:38:06.539 [2024-12-13 03:49:07.665197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.665210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.665293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.665306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.665460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.665475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.665625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.665638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.665876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.665930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.666217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.666259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.666403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.666445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.666660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.666702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.666900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.666978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.667089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.667103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 
00:38:06.539 [2024-12-13 03:49:07.667262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.667280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.667434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.667447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.667684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.667725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.667981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.668024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.668164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.539 [2024-12-13 03:49:07.668209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.539 qpair failed and we were unable to recover it. 00:38:06.539 [2024-12-13 03:49:07.668292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.668307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.668396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.668408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.668498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.668512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.668681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.668694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.668782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.668795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 
00:38:06.540 [2024-12-13 03:49:07.668945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.668959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.669046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.669059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.669165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.669177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.669314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.669328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.669592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.669607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.669756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.669770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.669922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.669935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.670019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.670034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.670140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.670152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.670304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.670318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 
00:38:06.540 [2024-12-13 03:49:07.670472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.670485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.670654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.671969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.671982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.672137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 
00:38:06.540 [2024-12-13 03:49:07.672252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.672409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.672580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.672671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.672752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.672900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.672913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.673068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.673081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.673166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.673177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.673246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.673258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.673480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.673521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 
00:38:06.540 [2024-12-13 03:49:07.673748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.673802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.674936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.674950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.540 [2024-12-13 03:49:07.675100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.675114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 
00:38:06.540 [2024-12-13 03:49:07.675206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.540 [2024-12-13 03:49:07.675218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.540 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.675358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.675372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.675520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.675536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.675615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.675627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.675777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.675791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.676035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.676050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.676210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.676224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.676314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.676326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.676457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.676472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.676656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.676670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 
00:38:06.541 [2024-12-13 03:49:07.676869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.676915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.677181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.677233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.677383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.677424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.677654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.677698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.677981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.678025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.678149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.678162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.678336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.678349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.678555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.678601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.678767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.678812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.679053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.679099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 
00:38:06.541 [2024-12-13 03:49:07.679275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.679290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.679413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.679427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.679584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.679598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.679847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.679861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.680069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.680083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.680230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.680243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.680494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.680508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.680648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.680661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.680880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.680930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.681097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.681139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 
00:38:06.541 [2024-12-13 03:49:07.681405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.681452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.681725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.681766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.682024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.682067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.682243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.682256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.682503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.682517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.682747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.682761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.682912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.682931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.683087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.683101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.541 [2024-12-13 03:49:07.683264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.541 [2024-12-13 03:49:07.683280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.541 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.683473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.683528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 
00:38:06.542 [2024-12-13 03:49:07.683772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.683830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.684903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.684922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.685149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.685163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.685371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.685386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 
00:38:06.542 [2024-12-13 03:49:07.685554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.685567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.685784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.685826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.686028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.686072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.686305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.686348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.686653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.686693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.686955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.686998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.687149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.687192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.687403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.687452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.687740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.687792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 
00:38:06.542 [2024-12-13 03:49:07.688209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.688932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.688945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 
00:38:06.542 [2024-12-13 03:49:07.689378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.689938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.689951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.690123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.690166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.690365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.690408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.690561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.690603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.542 qpair failed and we were unable to recover it. 00:38:06.542 [2024-12-13 03:49:07.690829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.542 [2024-12-13 03:49:07.690887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.691124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.691153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 
00:38:06.831 [2024-12-13 03:49:07.691401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.691426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.691577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.691592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.691680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.691693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.691783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.691796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.691893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.691907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 
00:38:06.831 [2024-12-13 03:49:07.692623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.692892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.692905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.693792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.693817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 
00:38:06.831 [2024-12-13 03:49:07.694009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.694946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.694960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.695039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.695051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 
00:38:06.831 [2024-12-13 03:49:07.695134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.831 [2024-12-13 03:49:07.695149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.831 qpair failed and we were unable to recover it. 00:38:06.831 [2024-12-13 03:49:07.695225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.695438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.695525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.695624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.695702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.695876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.695984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.695998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.696080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.696093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.696176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.696189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 
00:38:06.832 [2024-12-13 03:49:07.696335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.696349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.696594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.696608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.696688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.696700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.696840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.696855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 
00:38:06.832 [2024-12-13 03:49:07.697686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.697866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.697880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.698821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.698849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 
00:38:06.832 [2024-12-13 03:49:07.699019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.699035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.699203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.699216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.699435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.699448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.699532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.699546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.699796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.699809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.699974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.699988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.832 [2024-12-13 03:49:07.700138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.832 [2024-12-13 03:49:07.700152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.832 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.700364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.700378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.700557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.700570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.700706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.700719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 
00:38:06.833 [2024-12-13 03:49:07.700850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.700896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.701158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.701205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.701411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.701432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.701670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.701690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.701869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.701883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.701991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.702005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.702153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.702166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.702381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.702394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.702539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.702553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.702703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.702717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 
00:38:06.833 [2024-12-13 03:49:07.702868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.702885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.703039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.703053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.703208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.703223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.703456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.703469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.703632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.703646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.703793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.703831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.704055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.704098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.704419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.704461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.704718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.704760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.704979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.705021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 
00:38:06.833 [2024-12-13 03:49:07.705229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.705272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.705549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.705590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.705815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.705855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.706027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.706069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.706289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.706330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.706594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.706636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.706867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.706908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.707166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.707212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.707322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.707344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.707496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.707519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 
00:38:06.833 [2024-12-13 03:49:07.707670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.707690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.707934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.707979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.708211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.708256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.708590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.708635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.708939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.708984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.833 qpair failed and we were unable to recover it. 00:38:06.833 [2024-12-13 03:49:07.709237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.833 [2024-12-13 03:49:07.709278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.709571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.709612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.709807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.709847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.710150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.710164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.710337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.710351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 
00:38:06.834 [2024-12-13 03:49:07.710539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.710592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.710867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.710960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.711162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.711175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.711395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.711408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.711549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.711563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.711775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.711817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.712057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.712100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.712359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.712401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.712688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.712729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.713005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 
00:38:06.834 [2024-12-13 03:49:07.713118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.713288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.713394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.713485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.713654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.713887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.713941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.714184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.714225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.714500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.714543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.714841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.714880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.715082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.715096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 
00:38:06.834 [2024-12-13 03:49:07.715176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.715189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.715362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.715375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.715527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.715540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.715680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.715693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.715859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.715872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.716029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.716043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.716138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.716151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.716395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.716409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.716576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.716617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.716814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.716855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 
00:38:06.834 [2024-12-13 03:49:07.717152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.717201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.717411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.717425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.717574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.717588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.834 [2024-12-13 03:49:07.717776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.834 [2024-12-13 03:49:07.717817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.834 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.718100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.718143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.718438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.718452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.718671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.718684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.718833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.718846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.719000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.719014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.719213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.719227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 
00:38:06.835 [2024-12-13 03:49:07.719294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.719308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.719391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.719403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.719545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.719558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.719770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.719785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.720016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.720031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.720320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.720361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.720622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.720666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.720962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.720976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.721141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.721155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.721296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.721309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 
00:38:06.835 [2024-12-13 03:49:07.721511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.721524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.721661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.721674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.721826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.721840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.722031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.722044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.722247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.722262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.722464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.722477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.722544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.722557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.722696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.722709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.722852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.722866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.723071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.723086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 
00:38:06.835 [2024-12-13 03:49:07.723188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.723202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.723376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.723389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.723601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.723644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.723867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.835 [2024-12-13 03:49:07.723907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.835 qpair failed and we were unable to recover it. 00:38:06.835 [2024-12-13 03:49:07.724235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.724279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.724554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.724609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.724898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.724969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.725173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.725229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.725491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.725534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.725733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.725775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 
00:38:06.836 [2024-12-13 03:49:07.725936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.725981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.726254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.726267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.726487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.726500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.726706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.726719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.726814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.726826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.726955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.726969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.727126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.727139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.727371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.727412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.727557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.727599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.727745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.727788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 
00:38:06.836 [2024-12-13 03:49:07.727994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.728008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.728157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.728171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.728391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.728432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.728664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.728707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.728971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.729015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.729279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.729293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.729443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.729457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.729647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.729661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.729905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.729925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.730080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.730094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 
00:38:06.836 [2024-12-13 03:49:07.730324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.730368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.730578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.730619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.730821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.730863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.731073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.731116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.731425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.731468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.731705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.731746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.731951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.731994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.732172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.732186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.732340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.732354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 00:38:06.836 [2024-12-13 03:49:07.732441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.732454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.836 qpair failed and we were unable to recover it. 
00:38:06.836 [2024-12-13 03:49:07.732606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.836 [2024-12-13 03:49:07.732619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.732837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.732879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.733093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.733136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.733337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.733386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.733542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.733585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.733782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.733823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.733956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.733970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.734244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.734261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.734466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.734480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.734712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.734725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 
00:38:06.837 [2024-12-13 03:49:07.734928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.734942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.735104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.735119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.735331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.735346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.735594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.735607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.735753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.735766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.735906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.735926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.736144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.736158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.736316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.736331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.736548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.736561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.736773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.736786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 
00:38:06.837 [2024-12-13 03:49:07.736948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.736962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.737130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.737146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.737369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.737383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.737633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.737647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.737797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.737811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.738044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.738086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.738394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.738436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.738715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.738757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.739074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.739119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.739328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.739384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 
00:38:06.837 [2024-12-13 03:49:07.739598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.739640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.739780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.739824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.740118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.740162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.740382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.740396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.740624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.740637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.740862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.740877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.741040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.741054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.741256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.741270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.837 qpair failed and we were unable to recover it. 00:38:06.837 [2024-12-13 03:49:07.741413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.837 [2024-12-13 03:49:07.741426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.741563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.741577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 
00:38:06.838 [2024-12-13 03:49:07.741781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.741796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.741939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.741952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.742090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.742104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.742329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.742342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.742583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.742596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.742684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.742698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.742849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.742864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.743075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.743092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.743248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.743262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.743484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.743526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 
00:38:06.838 [2024-12-13 03:49:07.743726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.743769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.743962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.744005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.744201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.744214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.744377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.744390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.744624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.744666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.744816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.744859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.745135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.745177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.745324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.745337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.745499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.745512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.745708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.745723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 
00:38:06.838 [2024-12-13 03:49:07.745897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.745911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.746126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.746141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.746275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.746289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.746374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.746386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.746548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.746561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.746746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.746760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.746855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.746868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.747055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.747069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.747146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.747159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.747313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.747326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 
00:38:06.838 [2024-12-13 03:49:07.747508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.747521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.747686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.747701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.747863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.838 [2024-12-13 03:49:07.747876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.838 qpair failed and we were unable to recover it. 00:38:06.838 [2024-12-13 03:49:07.748022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.748036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.748126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.748138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.748304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.748318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.748540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.748553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.748657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.748671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.748928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.748943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.749098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.749112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 
00:38:06.839 [2024-12-13 03:49:07.749345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.749360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.749521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.749534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.749761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.749774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.749946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.749963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.750254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.750269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.750421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.750435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.750655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.750670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.750882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.750899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.751064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.751078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.751225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.751239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 
00:38:06.839 [2024-12-13 03:49:07.751439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.751453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.751593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.751607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.751841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.751856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.751989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.752122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.752227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.752323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.752435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.752588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.752772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 
00:38:06.839 [2024-12-13 03:49:07.752891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.752905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.753012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.753026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.753122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.753134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.753337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.753351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.753505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.753518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.753669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.753682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.753901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.753915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.754066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.754080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.754246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.754260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.754449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.754463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 
00:38:06.839 [2024-12-13 03:49:07.754654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.754667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.839 [2024-12-13 03:49:07.754746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.839 [2024-12-13 03:49:07.754759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.839 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.754899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.754912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.755136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.755150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.755253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.755265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.755524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.755538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.755761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.755776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.755930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.755944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.756175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.756188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.756282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.756295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 
00:38:06.840 [2024-12-13 03:49:07.756485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.756498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.756581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.756594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.756726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.756741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.756889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.756903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.757058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.757071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.757239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.757252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.757443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.757457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.757655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.757672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.757863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.757876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.757975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.757988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 
00:38:06.840 [2024-12-13 03:49:07.758202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.758214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.758429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.758447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.758722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.758738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.758881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.758895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.759124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.759138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.759391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.759408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.759570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.759585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.759883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.759896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.760160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.760206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.760415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.760457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 
00:38:06.840 [2024-12-13 03:49:07.760733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.760776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.761061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.761103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.761306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.840 [2024-12-13 03:49:07.761354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.840 qpair failed and we were unable to recover it. 00:38:06.840 [2024-12-13 03:49:07.761595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.761608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.761753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.761767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.761908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.761926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.762092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.762106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.762269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.762313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.762572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.762614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.762825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.762867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 
00:38:06.841 [2024-12-13 03:49:07.763141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.763184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.763432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.763446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.763592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.763605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.763769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.763782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.763959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.763972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.764220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.764234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.764395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.764410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.764591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.764612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.764789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.764821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.765164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.765207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 
00:38:06.841 [2024-12-13 03:49:07.765458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.765473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.765685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.765699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.765792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.765805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.766006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.766021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.766168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.766182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.766334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.766347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.766621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.766634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.766901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.766921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.767151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.767165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.767309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.767323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 
00:38:06.841 [2024-12-13 03:49:07.767558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.767601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.767941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.767985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.768171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.768186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.768333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.768347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.768624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.768666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.768916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.768996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.769270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.769285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.769440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.769453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.769600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.769615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.841 [2024-12-13 03:49:07.769783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.769796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 
00:38:06.841 [2024-12-13 03:49:07.769899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.841 [2024-12-13 03:49:07.769911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.841 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.770141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.770156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.770247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.770259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.770485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.770498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.770654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.770668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.770886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.770899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.771002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.771098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.771205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.771392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 
00:38:06.842 [2024-12-13 03:49:07.771584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.771773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.771934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.771948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.772096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.772109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.772283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.772296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.772390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.772402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.772556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.772570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.772731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.772743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.772939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.772953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.773065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.773080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 
00:38:06.842 [2024-12-13 03:49:07.773170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.773183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.773340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.773353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.773496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.773532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.773804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.773846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.774167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.774213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.774376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.774417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.774713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.774753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.774954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.775004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.775214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.775256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.775451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.775466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 
00:38:06.842 [2024-12-13 03:49:07.775704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.775717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.775861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.775875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.776047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.776139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.776225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.776392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.776543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.776796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.842 [2024-12-13 03:49:07.776969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.842 [2024-12-13 03:49:07.777014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.842 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.777209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.777250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 
00:38:06.843 [2024-12-13 03:49:07.777558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.777573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.777805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.777823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.777906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.777924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.778081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.778094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.778293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.778308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.778444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.778457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.778615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.778773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.778788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.778957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.778971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.779116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.779129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 
00:38:06.843 [2024-12-13 03:49:07.779329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.779343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.779598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.779611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.779782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.779796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.780791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.780805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 
00:38:06.843 [2024-12-13 03:49:07.781028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.781071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.781349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.781391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.781539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.781581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.781774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.781815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.782072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.782116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.782372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.782385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.782595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.782638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.782914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.782972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.783158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.783172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.783355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.783397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 
00:38:06.843 [2024-12-13 03:49:07.783538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.783579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.783851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.783893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.784128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.784173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.784389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.784429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.784699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.784712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.784816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.784840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.784990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.785004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.843 [2024-12-13 03:49:07.785173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.843 [2024-12-13 03:49:07.785186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.843 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.785433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.785446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.785651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.785664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 
00:38:06.844 [2024-12-13 03:49:07.785832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.785846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.786061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.786075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.786211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.786249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.786529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.786570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.786829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.786869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.787154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.787197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.787477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.787531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.787683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.787696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.787866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.787916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.788143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.788187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 
00:38:06.844 [2024-12-13 03:49:07.788383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.788425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.788678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.788692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.788911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.788935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.789952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.789966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 
00:38:06.844 [2024-12-13 03:49:07.790177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.790190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.790337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.790350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.790482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.790496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.790717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.790730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.790804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.790817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.790982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.791006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.791141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.791155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.791238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.791253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.791497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.791512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.791654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.791668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 
00:38:06.844 [2024-12-13 03:49:07.791901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.791954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.792167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.792209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.792465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.792506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.792786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.792831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.844 [2024-12-13 03:49:07.793036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.844 [2024-12-13 03:49:07.793077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.844 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.793305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.793318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.793515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.793529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.793792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.793806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.794056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.794070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.794274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.794288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 
00:38:06.845 [2024-12-13 03:49:07.794389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.794401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.794605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.794619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.794783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.794798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.794933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.794947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.795084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.795097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.795258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.795272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.795366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.795380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.795587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.795600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.795703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.795716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.795941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.795954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 
00:38:06.845 [2024-12-13 03:49:07.796120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.796134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.796271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.796285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.796475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.796488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.796553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.796565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.796729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.796743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.796848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.797005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.797117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.797285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.797441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 
00:38:06.845 [2024-12-13 03:49:07.797532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.797761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.797939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.797952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.798035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.798049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.798275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.798288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.845 qpair failed and we were unable to recover it. 00:38:06.845 [2024-12-13 03:49:07.798380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.845 [2024-12-13 03:49:07.798393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.798530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.798543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.798769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.798785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.798880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.798893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.799045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.799059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 
00:38:06.846 [2024-12-13 03:49:07.799283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.799297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.799530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.799543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.799763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.799776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.799932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.799945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.800154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.800168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.800318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.800332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.800565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.800578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.800680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.800692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.800863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.800877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.800987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.801000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 
00:38:06.846 [2024-12-13 03:49:07.801170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.801183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.801285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.801298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.801434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.801448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.801612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.801625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.801772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.801824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.801963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.802006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.802151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.802194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.802416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.802458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.802673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.802687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.802820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.802843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 
00:38:06.846 [2024-12-13 03:49:07.802983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.802997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.803097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.803111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.803206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.803219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.803289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.803301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.803438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.803484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.803652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.803698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.803884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.803908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.804095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.804119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.804293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.804352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.804508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.804552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 
00:38:06.846 [2024-12-13 03:49:07.804759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.804803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.805067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.805112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.805243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.805285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.805424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.846 [2024-12-13 03:49:07.805466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.846 qpair failed and we were unable to recover it. 00:38:06.846 [2024-12-13 03:49:07.805588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.805610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.805829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.805850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.805943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.805963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.806206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.806233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.806458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.806480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.806567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.806583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 
00:38:06.847 [2024-12-13 03:49:07.806685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.806698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.806770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.806783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.806864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.806876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.807102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.807116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.807211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.807224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.807306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.807319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.807506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.807547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.807752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.807793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.808009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.808051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.808193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.808206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 
00:38:06.847 [2024-12-13 03:49:07.808344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.808357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.808438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.808451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.808598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.808612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.808755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.808767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.808994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.809158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.809343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.809438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.809693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.809788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 
00:38:06.847 [2024-12-13 03:49:07.809947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.809961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.810178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.810191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.810275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.810289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.810434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.810447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.810559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.810588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.810705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.810727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.810891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.810953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.811192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.811235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.811381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.811422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 00:38:06.847 [2024-12-13 03:49:07.811633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.811653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.847 qpair failed and we were unable to recover it. 
00:38:06.847 [2024-12-13 03:49:07.811826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.847 [2024-12-13 03:49:07.811842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.811935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.811949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.812028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.812040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.812172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.812187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.812341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.812354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.812490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.812504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.812659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.812673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.812864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.812880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.813095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.813138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.813343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.813386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 
00:38:06.848 [2024-12-13 03:49:07.813584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.813629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.813716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.813729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.813928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.813943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.814106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.814119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.814273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.814286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.814356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.814368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.814521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.814534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.814695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.814709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.814872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.814913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.815208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.815250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 
00:38:06.848 [2024-12-13 03:49:07.815378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.815392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.815545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.815560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.815749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.815763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.815914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.815935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 
00:38:06.848 [2024-12-13 03:49:07.816769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.816952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.816966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.817068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.817082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.817217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.817239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.848 qpair failed and we were unable to recover it. 00:38:06.848 [2024-12-13 03:49:07.817387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.848 [2024-12-13 03:49:07.817401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.817477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.817490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.817626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.817639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.817867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.817908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.818126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.818168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 
00:38:06.849 [2024-12-13 03:49:07.818381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.818421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.818631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.818672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.818826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.818866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.819032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.819075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.819209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.819250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.819527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.819571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.819728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.819741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.819824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.819838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.820058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.820150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 
00:38:06.849 [2024-12-13 03:49:07.820376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.820476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.820636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.820730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.820953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.820967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.821068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.821081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.821160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.821172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.821309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.821321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.821391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.821404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.821614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.821628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 
00:38:06.849 [2024-12-13 03:49:07.821763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.821810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.822010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.822051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.822300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.822343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.822554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.822568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.822675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.822689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.822867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.822880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.823018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.823032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.823126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.849 [2024-12-13 03:49:07.823139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.849 qpair failed and we were unable to recover it. 00:38:06.849 [2024-12-13 03:49:07.823282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.823295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.823380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.823392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 
00:38:06.850 [2024-12-13 03:49:07.823539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.823552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.823645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.823659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.823830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.823843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.823942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.823955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.824157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.824173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.824310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.824323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.824402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.824415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.824612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.824626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.824773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.824786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 00:38:06.850 [2024-12-13 03:49:07.824878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.850 [2024-12-13 03:49:07.824892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.850 qpair failed and we were unable to recover it. 
00:38:06.850 [2024-12-13 03:49:07.825164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.850 [2024-12-13 03:49:07.825179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:06.850 qpair failed and we were unable to recover it.
00:38:06.856 (the same three-line error sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously with only the timestamps changing, from [2024-12-13 03:49:07.825164] through [2024-12-13 03:49:07.861780], Jenkins console time 00:38:06.850 to 00:38:06.856)
00:38:06.856 [2024-12-13 03:49:07.861938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.861957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.862936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.862949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.863109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.863123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.863264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.863278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 
00:38:06.856 [2024-12-13 03:49:07.863351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.863363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.863538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.863552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.863642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.863654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.863790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.863804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.864008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.864022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.864192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.864206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.864370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.864415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.864554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.864595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.864810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.864852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.865068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.865111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 
00:38:06.856 [2024-12-13 03:49:07.865392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.865434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.865670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.865684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.865782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.865796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.866818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 
00:38:06.856 [2024-12-13 03:49:07.866983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.866998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.867170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.867213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.867417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.867459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.867752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.867794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.867893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.867906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.867987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.868007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.868142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.856 [2024-12-13 03:49:07.868155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.856 qpair failed and we were unable to recover it. 00:38:06.856 [2024-12-13 03:49:07.868347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.868390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.868607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.868648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.868905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.868959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 
00:38:06.857 [2024-12-13 03:49:07.869105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.869369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.869532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.869621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.869767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.869863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.869951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.869965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.870040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.870053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.870187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.870200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.870403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.870417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 
00:38:06.857 [2024-12-13 03:49:07.870556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.870569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.870707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.870722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.870996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.871827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.871839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 
00:38:06.857 [2024-12-13 03:49:07.872038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.872926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.872941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.873023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 
00:38:06.857 [2024-12-13 03:49:07.873239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.873338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.873482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.873583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.873692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.873847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.857 [2024-12-13 03:49:07.873860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.857 qpair failed and we were unable to recover it. 00:38:06.857 [2024-12-13 03:49:07.874014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.874194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.874284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.874429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 
00:38:06.858 [2024-12-13 03:49:07.874521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.874738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.874833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.874930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.874944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.875013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.875163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.875318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.875397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.875554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.875634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 
00:38:06.858 [2024-12-13 03:49:07.875797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.875810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.876054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.876098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.876309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.876354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.876637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.876682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.876793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.876809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.876889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.876901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.877056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.877070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.877281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.877322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.877472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.877515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.877835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.877877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 
00:38:06.858 [2024-12-13 03:49:07.878104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.878150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.878366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.878380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.878462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.878474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.878552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.878569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.878714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.878730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.878819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.878835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.879097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.879140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.879353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.879366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.879470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.879483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.858 [2024-12-13 03:49:07.879623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.879635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 
00:38:06.858 [2024-12-13 03:49:07.879775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.858 [2024-12-13 03:49:07.879789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.858 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.879975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.879989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.880903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.880922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 
00:38:06.859 [2024-12-13 03:49:07.881123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.881900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.881914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.882065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.882277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.882384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 
00:38:06.859 [2024-12-13 03:49:07.882481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.882604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.882744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.882956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.882985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.883079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.883169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.883323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.883414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.883501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.883651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 
00:38:06.859 [2024-12-13 03:49:07.883858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.883901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.884070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.884114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.884319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.884360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.884504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.884541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.884739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.884754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.859 qpair failed and we were unable to recover it. 00:38:06.859 [2024-12-13 03:49:07.884995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.859 [2024-12-13 03:49:07.885010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 
00:38:06.860 [2024-12-13 03:49:07.885506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.885935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.885949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 
00:38:06.860 [2024-12-13 03:49:07.886875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.886977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.886991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.887168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.887211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.887409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.887449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.887602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.887643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.887844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.887857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.888104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.888117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.888304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.888346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.888553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.888593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.888765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.888806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 
00:38:06.860 [2024-12-13 03:49:07.888962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.888976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.889174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.889201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.889320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.889347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.889581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.889607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.889717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.889732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.889867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.889881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.889973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.889986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.890140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.890333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.890430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 
00:38:06.860 [2024-12-13 03:49:07.890536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.890689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.890839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.890941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.890955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.860 qpair failed and we were unable to recover it. 00:38:06.860 [2024-12-13 03:49:07.891120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.860 [2024-12-13 03:49:07.891135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.891217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.891310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.891412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.891559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.891656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 
00:38:06.861 [2024-12-13 03:49:07.891805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.891911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.891937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.892087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.892199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.892301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.892386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.892572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.892734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.892994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.893157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 
00:38:06.861 [2024-12-13 03:49:07.893252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.893426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.893576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.893681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.893851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.893864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.894021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.894185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.894279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.894376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.894473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 
00:38:06.861 [2024-12-13 03:49:07.894688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.894865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.894889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.895068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.895142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.895444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.895492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.895767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.895788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.895895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.895916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.896091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.896137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.896459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.896505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.896638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.896692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.896930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.896952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 
00:38:06.861 [2024-12-13 03:49:07.897057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.897079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.897197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.897217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.861 qpair failed and we were unable to recover it. 00:38:06.861 [2024-12-13 03:49:07.897432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.861 [2024-12-13 03:49:07.897453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.897696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.897711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.897868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.897884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.897971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.897985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.898115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.898227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.898397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.898483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 
00:38:06.862 [2024-12-13 03:49:07.898596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.898707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.898869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.898882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.899827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 
00:38:06.862 [2024-12-13 03:49:07.899908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.899925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.900061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.900075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.900210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.900225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.900368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.900381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.900482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.900496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.900713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.900754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.900887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.900955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.901112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.901154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.901352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.901392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.901498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.901515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 
00:38:06.862 [2024-12-13 03:49:07.901593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.901607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.901774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.901799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.901986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.902050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.902353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.902405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.902565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.902581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.902737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.902750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.902845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.902858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.903014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.903028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.903126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.903139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 00:38:06.862 [2024-12-13 03:49:07.903218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.862 [2024-12-13 03:49:07.903231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.862 qpair failed and we were unable to recover it. 
00:38:06.863 [2024-12-13 03:49:07.903404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.903424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.903509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.903523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.903677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.903691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.903790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.903805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.903895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.903911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 
00:38:06.863 [2024-12-13 03:49:07.904785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.904909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.904999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.905150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.905329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.905415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.905509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.905674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.905860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.905873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.906155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.906202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 
00:38:06.863 [2024-12-13 03:49:07.906422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.906469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.906675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.906697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.906879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.906903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.907081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.907104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.907273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.907318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.907528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.907571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.907787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.907831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.908031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.908227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.908272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.908563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.908584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 
00:38:06.863 [2024-12-13 03:49:07.908699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.908721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.908971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.908999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.909104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.909125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.863 [2024-12-13 03:49:07.909261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.863 [2024-12-13 03:49:07.909274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.863 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.909360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.909373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.909506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.909520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.909601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.909614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.909746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.909760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.909975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.910018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.910206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.910247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 
00:38:06.864 [2024-12-13 03:49:07.910394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.910435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.910629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.910642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.910774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.910787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.910902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.910966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.911182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.911225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.911369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.911412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.911549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.911562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.911643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.911657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.911806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.911819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.911961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.911974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 
00:38:06.864 [2024-12-13 03:49:07.912137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.912178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.912380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.912420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.912635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.912677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.912868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.912882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.913050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.913093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.913398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.913448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.913744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.913769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.913940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.913961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.914140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.914184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.914349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.914391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 
00:38:06.864 [2024-12-13 03:49:07.914600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.914645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.914894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.914909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.915070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.915083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.915236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.915250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.915487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.915530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.915658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.915701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.915906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.915969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.916106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.916148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.916356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.916398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.916612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.916656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 
00:38:06.864 [2024-12-13 03:49:07.916826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.916839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.916992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.864 [2024-12-13 03:49:07.917043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.864 qpair failed and we were unable to recover it. 00:38:06.864 [2024-12-13 03:49:07.917280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.917322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.917556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.917603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.917718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.917738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.917930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.917951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.918137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.918158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.918329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.918350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.918498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.918518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.918698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.918753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 
00:38:06.865 [2024-12-13 03:49:07.918984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.919032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.919235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.919278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.919483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.919528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.919766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.919807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.919998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.920041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.920275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.920318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.920535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.920550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.920641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.920655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.920797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.920811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.920976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.920990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 
00:38:06.865 [2024-12-13 03:49:07.921081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.921174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.921325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.921423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.921512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.921685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.921844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.921858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.922004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.922116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.922267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 
00:38:06.865 [2024-12-13 03:49:07.922367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.922527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.922619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.922867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.922909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.923078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.923121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.923254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.923295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.923511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.923554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.923767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.923812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.923962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.924005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.924156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.924208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 
00:38:06.865 [2024-12-13 03:49:07.924504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.865 [2024-12-13 03:49:07.924547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-12-13 03:49:07.924748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.924772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.924939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.924960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.925122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.925143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.925325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.925346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.925583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.925604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.925696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.925710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.925807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.925819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.925984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.925997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.926135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.926148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-12-13 03:49:07.926290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.926303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.926442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.926456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.926613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.926628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.926773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.926790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.926937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.926951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-12-13 03:49:07.927616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.927934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.927947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.928726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-12-13 03:49:07.928828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.928842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.929866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.929880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-12-13 03:49:07.930014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.866 [2024-12-13 03:49:07.930030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-12-13 03:49:07.930094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.930932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.930945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-12-13 03:49:07.931217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.931900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.931914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-12-13 03:49:07.932468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.932986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.932999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-12-13 03:49:07.933627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.933928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.933942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.934080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.934095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.934239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.934253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-12-13 03:49:07.934362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.867 [2024-12-13 03:49:07.934376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.934463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.934477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.934558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.934588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.934802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.934816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.934959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.934972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 
00:38:06.868 [2024-12-13 03:49:07.935048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.935062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.935262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.935277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.935450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.935463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.935666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.935680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.935749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.935761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.935842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.935856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.936001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.936016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.936225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.936266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.936399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.936440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.936638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.936680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 
00:38:06.868 [2024-12-13 03:49:07.936819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.936832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.936923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.936936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.937076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.937090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.937306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.937347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.937606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.937647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.937781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.937828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.937974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.937988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 
00:38:06.868 [2024-12-13 03:49:07.938405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.938964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.938976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.939056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.939069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.939207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.939220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.939442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.939455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.939664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.868 qpair failed and we were unable to recover it. 00:38:06.868 [2024-12-13 03:49:07.939859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.868 [2024-12-13 03:49:07.939901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 
00:38:06.869 [2024-12-13 03:49:07.940111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.940154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.940299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.940341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.940469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.940510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.940705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.940746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.940941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.940986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.941207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.941248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.941446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.941488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.941689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.941731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.942026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.942040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.942211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.942226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 
00:38:06.869 [2024-12-13 03:49:07.942454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.942496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.942651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.942667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.942854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.942896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.943909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.943928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 
00:38:06.869 [2024-12-13 03:49:07.944014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.944928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.944941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.945024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 
00:38:06.869 [2024-12-13 03:49:07.945132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.945222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.945406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.945643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.945726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.945819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.869 [2024-12-13 03:49:07.945833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.869 qpair failed and we were unable to recover it. 00:38:06.869 [2024-12-13 03:49:07.946034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 
00:38:06.870 [2024-12-13 03:49:07.946549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.946863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.946993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 
00:38:06.870 [2024-12-13 03:49:07.947647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.947922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.947935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 
00:38:06.870 [2024-12-13 03:49:07.948578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.948898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.948911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 
00:38:06.870 [2024-12-13 03:49:07.949786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.949874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.949887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.950030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.950044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.950145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.950161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.950319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.870 [2024-12-13 03:49:07.950333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.870 qpair failed and we were unable to recover it. 00:38:06.870 [2024-12-13 03:49:07.950398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.950411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.950494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.950507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.950587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.950601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.950801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.950815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.950895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.950909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 
00:38:06.871 [2024-12-13 03:49:07.950984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.950996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.951087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.951100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.951237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.951249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.951426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.951439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.951624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.951637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.951796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.951809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.951874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.951886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.952027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.952128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.952224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 
00:38:06.871 [2024-12-13 03:49:07.952378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.952469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.952617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.952777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.952790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 
00:38:06.871 [2024-12-13 03:49:07.953742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.953956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.953969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.954846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 
00:38:06.871 [2024-12-13 03:49:07.954955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.954969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.955106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.955119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.955208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.871 [2024-12-13 03:49:07.955222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.871 qpair failed and we were unable to recover it. 00:38:06.871 [2024-12-13 03:49:07.955355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.955368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.955434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.955446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.955543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.955557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.955710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.955725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.955792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.955803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.955955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.955970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.956150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.956192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 
00:38:06.872 [2024-12-13 03:49:07.956317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.956357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.956559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.956601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.956733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.956745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.956936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.956950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.957038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.957050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.957228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.957241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.957479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.957520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.957666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.957707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.957980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.958152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 
00:38:06.872 [2024-12-13 03:49:07.958362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.958462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.958564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.958680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.958933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.958946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 
00:38:06.872 [2024-12-13 03:49:07.959566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.872 [2024-12-13 03:49:07.959832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.872 qpair failed and we were unable to recover it. 00:38:06.872 [2024-12-13 03:49:07.959964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.959977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.960111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.960261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.960363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.960549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.960646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.960750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 
00:38:06.873 [2024-12-13 03:49:07.960901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.960914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.961769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.961782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.962012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.962027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 00:38:06.873 [2024-12-13 03:49:07.962116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.873 [2024-12-13 03:49:07.962134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.873 qpair failed and we were unable to recover it. 
00:38:06.873 [2024-12-13 03:49:07.962284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:06.873 [2024-12-13 03:49:07.962326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:06.873 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 03:49:07.962 through 03:49:07.997 (console timestamps 00:38:06.873 to 00:38:06.879), mostly against tqpair=0x61500033fe80, with occasional attempts against tqpair=0x615000326480, 0x61500032ff80, and 0x615000350000, always targeting addr=10.0.0.2, port=4420 ...]
00:38:06.879 [2024-12-13 03:49:07.997749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.997793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.997902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.997916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.998062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.998076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.998242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.998255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.998417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.998459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.998608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.998649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.998957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.998997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.999222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.999236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.999374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.999388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.999524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.999538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 
00:38:06.879 [2024-12-13 03:49:07.999733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.999784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:07.999936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:07.999982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:08.000134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:08.000189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:08.000405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:08.000450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:08.000711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:08.000725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:08.000798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:08.000812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:08.001032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:08.001046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.879 [2024-12-13 03:49:08.001135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.879 [2024-12-13 03:49:08.001149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.879 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.001351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.001514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 
00:38:06.880 [2024-12-13 03:49:08.001604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.001714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.001810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.001904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.001986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.001999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.002168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.002181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.002385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.002399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.002476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.002489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.002569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.002585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.002675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.002689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 
00:38:06.880 [2024-12-13 03:49:08.002876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.002890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.003875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.003889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 
00:38:06.880 [2024-12-13 03:49:08.004129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.004926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.004940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.005082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.005095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.005275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.005289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 
00:38:06.880 [2024-12-13 03:49:08.005423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.005437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.880 qpair failed and we were unable to recover it. 00:38:06.880 [2024-12-13 03:49:08.005496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.880 [2024-12-13 03:49:08.005508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.005672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.005686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.005833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.005887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.006117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.006161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.006330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.006372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.006508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.006549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.006776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.006799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.007017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.007272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 
00:38:06.881 [2024-12-13 03:49:08.007391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.007564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.007671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.007816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.007977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.007991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 
00:38:06.881 [2024-12-13 03:49:08.008754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.008934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.008998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.009011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.009144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.009158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.009326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.009367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.009667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.009709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.009849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.009888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:06.881 [2024-12-13 03:49:08.009970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:06.881 [2024-12-13 03:49:08.009984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:06.881 qpair failed and we were unable to recover it. 00:38:07.167 [2024-12-13 03:49:08.010151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.167 [2024-12-13 03:49:08.010165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.167 qpair failed and we were unable to recover it. 00:38:07.167 [2024-12-13 03:49:08.010378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.167 [2024-12-13 03:49:08.010420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.167 qpair failed and we were unable to recover it. 
00:38:07.167 [2024-12-13 03:49:08.010637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.167 [2024-12-13 03:49:08.010679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.167 qpair failed and we were unable to recover it. 00:38:07.167 [2024-12-13 03:49:08.010802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.167 [2024-12-13 03:49:08.010843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.011119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.011133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.011209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.011221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.011377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.011389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.011535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.011549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.011625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.011639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.011794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.011835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.012044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.012087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.012290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.012333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 
00:38:07.168 [2024-12-13 03:49:08.012536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.012578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.012790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.012830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.013924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.013938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.014024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 
00:38:07.168 [2024-12-13 03:49:08.014193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.014342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.014440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.014618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.014729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.014809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.014823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.015072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.015115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.015327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.015370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.015563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.015612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.015760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.015801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 
00:38:07.168 [2024-12-13 03:49:08.016012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.016056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.016317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.016360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.016548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.016589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.016732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.016774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.016992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.017035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.017255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.017297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.017546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.017587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.017737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.017779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.018041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.018083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.018254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.018268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 
00:38:07.168 [2024-12-13 03:49:08.018479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.018520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.018813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.018855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.019929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.019943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 00:38:07.168 [2024-12-13 03:49:08.020091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.168 [2024-12-13 03:49:08.020107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.168 qpair failed and we were unable to recover it. 
00:38:07.168 [2024-12-13 03:49:08.020191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.168 [2024-12-13 03:49:08.020209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:07.168 qpair failed and we were unable to recover it.
00:38:07.168 [2024-12-13 03:49:08.020385 through 03:49:08.051054] the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) recurs for every reconnect attempt in this interval.
00:38:07.174 [2024-12-13 03:49:08.051233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.051247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.051395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.051447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.051571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.051614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.051870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.051910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.052070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.052110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.052341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.052386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.052520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.052566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.052777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.052861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.053111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.053157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.053372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.053414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 
00:38:07.174 [2024-12-13 03:49:08.053623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.053665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.053901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.054055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.054068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.054261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.054302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.054496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.054537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.054828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.054871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.055085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.055127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.055338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.055379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.055581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.055633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.055831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.055872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 
00:38:07.174 [2024-12-13 03:49:08.056158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.056186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.056358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.056380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.056549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.056570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.056683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.056698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.056856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.056869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.057044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.057058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.057193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.057208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.057290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.057303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.057449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.057462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 00:38:07.174 [2024-12-13 03:49:08.057602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.174 [2024-12-13 03:49:08.057615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.174 qpair failed and we were unable to recover it. 
00:38:07.174 [2024-12-13 03:49:08.057765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.057779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.057856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.057872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.058844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.058858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.059012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 
00:38:07.175 [2024-12-13 03:49:08.059094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.059187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.059404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.059564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.059720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.059822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.059850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.060017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.060073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.060287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.060334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.060602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.060645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.060780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.060802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 
00:38:07.175 [2024-12-13 03:49:08.060970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.060992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.061233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.061250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.061333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.061344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.061571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.061584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.061723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.061737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.061818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.061832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.061900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.061913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.062077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.062243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.062412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 
00:38:07.175 [2024-12-13 03:49:08.062573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.062688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.062788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.062938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.062951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.063201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.063214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.063348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.063363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.063503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.063516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.063624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.063640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.063722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.175 [2024-12-13 03:49:08.063738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.175 qpair failed and we were unable to recover it. 00:38:07.175 [2024-12-13 03:49:08.063812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.063825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 
00:38:07.176 [2024-12-13 03:49:08.063976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.063994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.064931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.064945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 
00:38:07.176 [2024-12-13 03:49:08.065323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.065957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.065983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.066089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.066232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.066350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.066615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 
00:38:07.176 [2024-12-13 03:49:08.066696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.066857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.066950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.066962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.067123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.067137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.067222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.067236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.067373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.067386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.067590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.067631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.067825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.067866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.068036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.068086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.068201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.068215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 
00:38:07.176 [2024-12-13 03:49:08.068425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.068438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.068576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.068589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.068673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.068685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.176 [2024-12-13 03:49:08.068779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.176 [2024-12-13 03:49:08.068792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.176 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.068932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.068945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.069034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.069046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.069252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.069267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.069413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.069465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.069679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.069721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.069858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.069900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 
00:38:07.177 [2024-12-13 03:49:08.070056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.070138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.070280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.070386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.070653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.070768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.070858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.070870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.071010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.071094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.071310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 
00:38:07.177 [2024-12-13 03:49:08.071479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.071695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.071789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.071900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.071914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.072068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.072082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.072276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.072324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.072488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.072533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.072684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.072727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.072916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.072944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.073100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 
00:38:07.177 [2024-12-13 03:49:08.073280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.073377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.073527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.073704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.073878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.073972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.073984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.074138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.074151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.074290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.074305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.074400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.074418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.074518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.074531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 
00:38:07.177 [2024-12-13 03:49:08.074612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.074626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.074702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.177 [2024-12-13 03:49:08.074714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.177 qpair failed and we were unable to recover it. 00:38:07.177 [2024-12-13 03:49:08.074892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.074947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.075146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.075188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.075428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.075481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.075647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.075691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.075901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.075955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.076148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.076168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.076335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.076357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.076536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.076580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 
00:38:07.178 [2024-12-13 03:49:08.076791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.076832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.077112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.077156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.077314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.077358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.077566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.077607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.077750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.077770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.078030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.078051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.078236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.078279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.078516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.078558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.078767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.078809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.079022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.079065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 
00:38:07.178 [2024-12-13 03:49:08.079279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.079301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.079479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.079520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.079658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.079700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.079906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.079958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.080249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.080292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.080524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.080578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.080786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.080826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.080962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.081004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.081217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.081259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.081451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.081492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 
00:38:07.178 [2024-12-13 03:49:08.081691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.081731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.081951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.081973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.082070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.082090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.082345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.082511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.082528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.082636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.082650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.082794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.082808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.082893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.082906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.083064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.083079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.083236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.083249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 
00:38:07.178 [2024-12-13 03:49:08.083484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.178 [2024-12-13 03:49:08.083498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.178 qpair failed and we were unable to recover it. 00:38:07.178 [2024-12-13 03:49:08.083588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.083602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.083845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.083859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.084030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.084044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.084188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.084203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.084289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.084303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.084524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.084538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.084710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.084754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.085029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.085072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.085219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.085260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 
00:38:07.179 [2024-12-13 03:49:08.085456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.085497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.085633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.085674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.085904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.085949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.086150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.086164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.086246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.086258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.086396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.086409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.086491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.086504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.086702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.086715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.087010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.087055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.087316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.087359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 
00:38:07.179 [2024-12-13 03:49:08.087574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.087618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.087849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.087901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.088236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.088279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.088444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.088485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.088683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.088725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.088997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.089050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.089249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.089289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.089509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.089552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.089833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.089874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.090092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.090136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 
00:38:07.179 [2024-12-13 03:49:08.090317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.090331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.090486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.090532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.090767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.090811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.090953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.091003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.091200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.091214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.091309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.091321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.091510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.091524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.091759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.091773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.091874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.091889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 00:38:07.179 [2024-12-13 03:49:08.092123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.179 [2024-12-13 03:49:08.092139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.179 qpair failed and we were unable to recover it. 
00:38:07.179 [2024-12-13 03:49:08.092279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.092293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.092428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.092442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.092518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.092531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.092610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.092623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.092852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.092865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 
00:38:07.180 [2024-12-13 03:49:08.093618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.093834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.093852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 
00:38:07.180 [2024-12-13 03:49:08.094879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.094986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.094998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.095909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.095932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.096197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.096239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 
00:38:07.180 [2024-12-13 03:49:08.096438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.096479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.096628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.096671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.096806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.096849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.096962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.096977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.097138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.180 [2024-12-13 03:49:08.097152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.180 qpair failed and we were unable to recover it. 00:38:07.180 [2024-12-13 03:49:08.097297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.097405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.097506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.097589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.097704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 
00:38:07.181 [2024-12-13 03:49:08.097798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.097953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.097966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.098110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.098124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.098267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.098281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.098484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.098498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.098761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.098803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 
00:38:07.181 [2024-12-13 03:49:08.099508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.099943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.099957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 
00:38:07.181 [2024-12-13 03:49:08.100752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.100929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.100943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.101103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.101118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.101329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.101372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.101496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.101537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.101751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.101799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.102086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.102130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.102388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.102402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.102643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.102656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.102757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.102771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 
00:38:07.181 [2024-12-13 03:49:08.102914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.102932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.103156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.103169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.103322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.103335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.103423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.181 [2024-12-13 03:49:08.103435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.181 qpair failed and we were unable to recover it. 00:38:07.181 [2024-12-13 03:49:08.103503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.103515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.103583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.103595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.103686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.103716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.103867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.103881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.103966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.103979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.104125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.104137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 
00:38:07.182 [2024-12-13 03:49:08.104221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.104232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.104380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.104421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.104609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.104649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.104782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.104822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.105078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.105093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.105174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.105189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.105259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.105272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.105395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.105440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.105645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.105731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.106092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.106179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 
00:38:07.182 [2024-12-13 03:49:08.106357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.106373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.106472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.106486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.106717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.106771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.106923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.106938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.107054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.107069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.107155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.107168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.107321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.107363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.107495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.107538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.107748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.107789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.107938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.107984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 
00:38:07.182 [2024-12-13 03:49:08.108184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.108199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.108267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.108280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.108362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.108375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.108617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.108659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.108916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.108969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.109122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.109139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.109350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.109392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.109548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.109590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.109891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.109951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.110211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.110265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 
00:38:07.182 [2024-12-13 03:49:08.110547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.110633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.110796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.110840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.111049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.182 [2024-12-13 03:49:08.111064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.182 qpair failed and we were unable to recover it. 00:38:07.182 [2024-12-13 03:49:08.111154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.111167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.111316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.111366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.111578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.111622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.111831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.111876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.111965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.111979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.112057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.112070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.112178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.112221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 
00:38:07.183 [2024-12-13 03:49:08.112429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.112473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.112734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.112776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.112971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.113014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.113224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.113280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.113515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.113531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.113686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.113701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.113850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.113892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.114115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.114158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.114434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.114477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.114666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.114707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 
00:38:07.183 [2024-12-13 03:49:08.114907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.114963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.115182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.115225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.115443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.115485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.115762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.115804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.116014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.116059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.116187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.116229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.116489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.116530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.116741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.116783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.116999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.117014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.117285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.117327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 
00:38:07.183 [2024-12-13 03:49:08.117518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.117559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.117752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.117796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.117980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.117995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.118156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.118199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.118406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.118449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.118665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.118715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.118990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.119047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.119227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.119242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.119450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.119492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.119777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.119820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 
00:38:07.183 [2024-12-13 03:49:08.119960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.119975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.183 [2024-12-13 03:49:08.120058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.183 [2024-12-13 03:49:08.120071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.183 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.120143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.120172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.120300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.120343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.120535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.120577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.120732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.120775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.121038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.121083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.121270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.121285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.121372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.121385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.121640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.121684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 
00:38:07.184 [2024-12-13 03:49:08.121878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.121931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.122201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.122216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.122376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.122417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.122672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.122713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.122866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.122909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.123180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.123222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.123427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.123442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.123514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.123528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.123715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.123758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.123975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.124021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 
00:38:07.184 [2024-12-13 03:49:08.124280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.124323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.124466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.124508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.124737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.124780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.125054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.125210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.125330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2918031 Killed "${NVMF_APP[@]}" "$@" 00:38:07.184 [2024-12-13 03:49:08.125428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.125594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.125695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 
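Editor's note: the shell message above (target_disconnect.sh: line 36: 2918031 Killed "${NVMF_APP[@]}" "$@") shows the harness killing the running target application, which is why every connect() attempt from the host side keeps being refused until a new target is started. The sketch below is only a hedged illustration of the host-side pattern visible in the log, a bounded reconnect loop that retries until the listener comes back or a retry budget runs out; it is not the actual SPDK reconnect logic, and the retry budget and back-off are made up for the example.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try one TCP connect; return the fd on success or -1 on failure. */
static int try_connect(const char *ip, int port)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		return -1;
	}
	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
	inet_pton(AF_INET, ip, &addr.sin_addr);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
		return fd;
	}
	int err = errno;
	close(fd);
	errno = err;
	return -1;
}

int main(void)
{
	/* Address and port taken from the log; retry budget is illustrative. */
	const char *ip = "10.0.0.2";
	int port = 4420;

	for (int attempt = 1; attempt <= 100; attempt++) {
		int fd = try_connect(ip, port);
		if (fd >= 0) {
			printf("connected on attempt %d\n", attempt);
			close(fd);
			return 0;
		}
		fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
			attempt, errno, strerror(errno));
		usleep(100 * 1000);   /* back off 100 ms before retrying */
	}
	fprintf(stderr, "giving up: listener never came back\n");
	return 1;
}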
00:38:07.184 [2024-12-13 03:49:08.125789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.125880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.125893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.126042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.126193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:07.184 [2024-12-13 03:49:08.126295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.126412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.126506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.126593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:07.184 [2024-12-13 03:49:08.126752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 qpair failed and we were unable to recover it. 
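Editor's note: the trace lines above show the test calling disconnect_init 10.0.0.2 and then nvmfappstart -m 0xF0 to bring a target back up. In SPDK applications -m is a hexadecimal CPU core mask, so 0xF0 (binary 1111 0000) selects cores 4 through 7. The tiny sketch below just expands a mask like that into the core indices it covers; the mask value comes from the log, everything else is illustrative.

#include <stdio.h>

int main(void)
{
	/* Core mask from the log: 0xF0 -> binary 1111 0000 -> cores 4,5,6,7. */
	unsigned long long mask = 0xF0;

	printf("mask 0x%llX selects cores:", mask);
	for (int core = 0; core < 64; core++) {
		if (mask & (1ULL << core)) {
			printf(" %d", core);
		}
	}
	printf("\n");
	return 0;
}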
00:38:07.184 [2024-12-13 03:49:08.126932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.184 [2024-12-13 03:49:08.126949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.184 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:07.184 qpair failed and we were unable to recover it. 00:38:07.184 [2024-12-13 03:49:08.127086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.127101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.127232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.127248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:07.185 [2024-12-13 03:49:08.127449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.127464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:07.185 [2024-12-13 03:49:08.127553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.127568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.127638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.127652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.127862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.127903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.128057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.128100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.128321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.128409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 
00:38:07.185 [2024-12-13 03:49:08.128687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.128774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.129101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.129148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.129347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.129396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.129604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.129649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.129804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.129847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.129996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.130041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.130309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.130353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.130560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.130603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.130862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.130905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.131086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.131131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 
00:38:07.185 [2024-12-13 03:49:08.131269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.131323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.131511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.131534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.131687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.131715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.131948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.131994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.132190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.132233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.132537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.132580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.132710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.132753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.132971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.133015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.133236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.133281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.133564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.133610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 
00:38:07.185 [2024-12-13 03:49:08.133819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.133863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2918881 00:38:07.185 [2024-12-13 03:49:08.134109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.134155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.134303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2918881 00:38:07.185 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:07.185 [2024-12-13 03:49:08.134570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 [2024-12-13 03:49:08.134614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.185 qpair failed and we were unable to recover it. 00:38:07.185 [2024-12-13 03:49:08.134763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.185 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2918881 ']' 00:38:07.186 [2024-12-13 03:49:08.134822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.135059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.135089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:07.186 [2024-12-13 03:49:08.135263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.135311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 
00:38:07.186 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.186 [2024-12-13 03:49:08.135465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.135509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:07.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:07.186 [2024-12-13 03:49:08.135728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.135773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.135946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.135994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.136211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.136254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 03:49:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:07.186 [2024-12-13 03:49:08.136465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.136508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.136711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.136752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.136999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.137043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.137250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.137301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 
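Editor's note: after relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace, the harness prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." and waits (with max_retries=100, per the trace) for the target's RPC socket to come up. The sketch below shows one way such a wait could be written; the socket path and retry count come from the log, but the polling logic is only an illustration, not the harness's actual waitforlisten helper.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Poll until a UNIX domain socket accepts a connection, or give up. */
static int wait_for_listen(const char *path, int max_retries)
{
	struct sockaddr_un addr;
	memset(&addr, 0, sizeof(addr));
	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

	for (int i = 0; i < max_retries; i++) {
		int fd = socket(AF_UNIX, SOCK_STREAM, 0);
		if (fd < 0) {
			return -1;
		}
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			close(fd);
			return 0;        /* the RPC socket is up */
		}
		close(fd);
		usleep(500 * 1000);      /* wait 500 ms and try again */
	}
	return -1;
}

int main(void)
{
	/* Path and retry count mirror the values visible in the log. */
	if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0) {
		printf("target is listening\n");
		return 0;
	}
	fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
	return 1;
}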
00:38:07.186 [2024-12-13 03:49:08.138405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.138434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.138676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.138690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.138886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.138903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.139065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.139080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.139275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.139291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.139520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.139538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.139621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.139634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.139733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.139746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.139846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.139858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.140017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.140031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 
00:38:07.186 [2024-12-13 03:49:08.140181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.140212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.140408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.140451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.140650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.140692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.140905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.140966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.141103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.141117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.141253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.141266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.141495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.141509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.141581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.141593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.141798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.141811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.141910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.141929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 
00:38:07.186 [2024-12-13 03:49:08.142012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.142025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.142172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.142185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.142335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.142349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.142433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.142447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.142602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.142615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.186 qpair failed and we were unable to recover it. 00:38:07.186 [2024-12-13 03:49:08.142713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.186 [2024-12-13 03:49:08.142725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.142863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.142876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 
00:38:07.187 [2024-12-13 03:49:08.143304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.143948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.143961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 
00:38:07.187 [2024-12-13 03:49:08.144543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.144894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.144989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 
00:38:07.187 [2024-12-13 03:49:08.145606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.145931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.145944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 
00:38:07.187 [2024-12-13 03:49:08.146731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.146847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.146999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.147012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.147147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.187 [2024-12-13 03:49:08.147160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.187 qpair failed and we were unable to recover it. 00:38:07.187 [2024-12-13 03:49:08.147253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.147341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.147427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.147526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.147616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.147731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 
00:38:07.188 [2024-12-13 03:49:08.147900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.147913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.148789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.148991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 
00:38:07.188 [2024-12-13 03:49:08.149156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.149972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.149985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 
00:38:07.188 [2024-12-13 03:49:08.150177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.150926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.150939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.151011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.151023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.151085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.151097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 
00:38:07.188 [2024-12-13 03:49:08.151235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.151249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.151322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.151334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.151420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.188 [2024-12-13 03:49:08.151433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.188 qpair failed and we were unable to recover it. 00:38:07.188 [2024-12-13 03:49:08.151584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.151596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.151668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.151680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.151747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.151759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.151897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.151928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.152092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.152114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.152203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.152222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.152477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.152497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 
00:38:07.189 [2024-12-13 03:49:08.152589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.152622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.152779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.152802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 
00:38:07.189 [2024-12-13 03:49:08.153838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.153851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.153987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.154828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 
00:38:07.189 [2024-12-13 03:49:08.154938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.154951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.189 [2024-12-13 03:49:08.155715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.189 qpair failed and we were unable to recover it. 00:38:07.189 [2024-12-13 03:49:08.155790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.155803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.155885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.155906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 
00:38:07.190 [2024-12-13 03:49:08.156006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.156019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.156240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.156254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.156460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.156501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.156641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.156684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.156837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.156876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.157037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.157082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.157325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.157383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.157581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.157603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.157728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.157755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.157915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.157947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 
00:38:07.190 [2024-12-13 03:49:08.158043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.158065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.158223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.158245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.158422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.158444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.158533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.158575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.158702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.158744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.158880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.158935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.159130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.159258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.159480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.159643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 
00:38:07.190 [2024-12-13 03:49:08.159739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.159827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.159942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.159957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.160104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.160117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.160266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.160279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.160418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.160432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.160595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.160631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.160837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.160877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.161050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.161285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 
00:38:07.190 [2024-12-13 03:49:08.161369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.161468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.161690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.161839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.161942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.161957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.162039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.162052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.162183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.190 [2024-12-13 03:49:08.162197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.190 qpair failed and we were unable to recover it. 00:38:07.190 [2024-12-13 03:49:08.162268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.162367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.162471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 
00:38:07.191 [2024-12-13 03:49:08.162571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.162658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.162757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.162843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.162855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.162988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.163001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.163113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.163126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.163236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.163280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.163376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.163400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.163559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.163583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 00:38:07.191 [2024-12-13 03:49:08.163671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.191 [2024-12-13 03:49:08.163687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.191 qpair failed and we were unable to recover it. 
00:38:07.191 [2024-12-13 03:49:08.163767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.191 [2024-12-13 03:49:08.163780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:07.191 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats more than two hundred times between 2024-12-13 03:49:08.163928 and 03:49:08.195079 (log marks 00:38:07.191 through 00:38:07.196). Only the timestamp and the reported tqpair handle vary from entry to entry: most repetitions report tqpair=0x61500033fe80, with occasional entries for tqpair=0x615000350000, 0x615000326480, and 0x61500032ff80, all targeting addr=10.0.0.2, port=4420, and every repetition ends with "qpair failed and we were unable to recover it." ...]
00:38:07.196 [2024-12-13 03:49:08.195153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.196 [2024-12-13 03:49:08.195165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.196 qpair failed and we were unable to recover it. 00:38:07.196 [2024-12-13 03:49:08.195256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.196 [2024-12-13 03:49:08.195268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.196 qpair failed and we were unable to recover it. 00:38:07.196 [2024-12-13 03:49:08.195343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.196 [2024-12-13 03:49:08.195356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.196 qpair failed and we were unable to recover it. 00:38:07.196 [2024-12-13 03:49:08.195496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.196 [2024-12-13 03:49:08.195509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.196 qpair failed and we were unable to recover it. 00:38:07.196 [2024-12-13 03:49:08.195651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.196 [2024-12-13 03:49:08.195664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.196 qpair failed and we were unable to recover it. 00:38:07.196 [2024-12-13 03:49:08.195736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.195747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.195892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.195905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.196059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.196160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.196350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 
00:38:07.197 [2024-12-13 03:49:08.196434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.196597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.196769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.196861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.196875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 
00:38:07.197 [2024-12-13 03:49:08.197734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.197983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.197997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 
00:38:07.197 [2024-12-13 03:49:08.198892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.198906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.198986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.199911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.199930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 
00:38:07.197 [2024-12-13 03:49:08.200011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.200024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.200094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.200106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.200190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.197 [2024-12-13 03:49:08.200204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.197 qpair failed and we were unable to recover it. 00:38:07.197 [2024-12-13 03:49:08.200284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.200377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.200390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.200464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.200478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.200544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.200557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.200637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.200650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.200791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.200804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 
00:38:07.198 [2024-12-13 03:49:08.201113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.201904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.201932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 
00:38:07.198 [2024-12-13 03:49:08.202288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.202912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.202939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 
00:38:07.198 [2024-12-13 03:49:08.203444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.203906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.203993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.204006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.204151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.204164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.204248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.204261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.198 qpair failed and we were unable to recover it. 00:38:07.198 [2024-12-13 03:49:08.204357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.198 [2024-12-13 03:49:08.204370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.204528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.204542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 
00:38:07.199 [2024-12-13 03:49:08.204628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.204641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.204718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.204732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.204818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.204831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.204915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.204935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 
00:38:07.199 [2024-12-13 03:49:08.205558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.205965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.205979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 
00:38:07.199 [2024-12-13 03:49:08.206798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.206899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.206981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 
00:38:07.199 [2024-12-13 03:49:08.207870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.207966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.207979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.199 qpair failed and we were unable to recover it. 00:38:07.199 [2024-12-13 03:49:08.208878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.199 [2024-12-13 03:49:08.208891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 
00:38:07.200 [2024-12-13 03:49:08.208974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.208988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.209979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.209992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 
00:38:07.200 [2024-12-13 03:49:08.210081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.210902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.210987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 
00:38:07.200 [2024-12-13 03:49:08.211067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.211969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.211982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 
00:38:07.200 [2024-12-13 03:49:08.212320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.212822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.212835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.213013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.213056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.213255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.213297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.213435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.213477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.213614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.213628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 
00:38:07.200 [2024-12-13 03:49:08.213726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.213747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.200 [2024-12-13 03:49:08.213889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.200 [2024-12-13 03:49:08.213902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.200 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.213991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.214119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.214310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.214470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.214565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.214779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.214884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.214897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 
00:38:07.201 [2024-12-13 03:49:08.215151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.215986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.215999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.216064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.216165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 
00:38:07.201 [2024-12-13 03:49:08.216251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.216409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.216560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.216724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.216817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.216837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 
00:38:07.201 [2024-12-13 03:49:08.217773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.217969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.217982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.218950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.218993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 
00:38:07.201 [2024-12-13 03:49:08.219219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.219262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.219394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.219437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.219581] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:07.201 [2024-12-13 03:49:08.219652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.201 [2024-12-13 03:49:08.219668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.201 qpair failed and we were unable to recover it. 00:38:07.201 [2024-12-13 03:49:08.219685] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.202 [2024-12-13 03:49:08.219821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.219836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.219932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.219944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.220119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.220133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.220348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.220362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.220503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.220516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.220669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.220682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 
00:38:07.202 [2024-12-13 03:49:08.220864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.220906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.221076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.221118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.221350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.221391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.221515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.221530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.221744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.221757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.221831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.221846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.221935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.221950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.222087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.222099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.222340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.222383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.222519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.222562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 
00:38:07.202 [2024-12-13 03:49:08.222768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.222810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.223037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.223080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.223266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.223310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.223517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.223557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.223756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.223799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.223992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.224036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.224220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.224238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.224306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.224319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.224484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.224498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.224652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.224699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 
00:38:07.202 [2024-12-13 03:49:08.224865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.224941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.225215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.225294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.225646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.225731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.225970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.226029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.226295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.226339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.226538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.226580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.226756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.226769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.226848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.226861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.226995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.227009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.227174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.227217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 
00:38:07.202 [2024-12-13 03:49:08.227484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.227526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.227734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.202 [2024-12-13 03:49:08.227775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.202 qpair failed and we were unable to recover it. 00:38:07.202 [2024-12-13 03:49:08.227913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.227969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.228123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.228169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.228366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.228380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.228587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.228600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.228669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.228681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.228764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.228777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.228975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.229018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.229225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.229267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 
00:38:07.203 [2024-12-13 03:49:08.229473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.229515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.229779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.229792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.229929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.229943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 
00:38:07.203 [2024-12-13 03:49:08.230875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.230971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.230991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.231819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.231833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 
00:38:07.203 [2024-12-13 03:49:08.232006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.232020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.232163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.232176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.232259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.232273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.232497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.232511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.232606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.232620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.232784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.232797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.232999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.233013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.233082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.233094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.233255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.233268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.233441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.233454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 
00:38:07.203 [2024-12-13 03:49:08.233550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.233563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.233659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.203 [2024-12-13 03:49:08.233672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.203 qpair failed and we were unable to recover it. 00:38:07.203 [2024-12-13 03:49:08.233761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.233776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.233859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.233872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.233952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.233967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.234138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.234213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.234394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.234491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.234657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 
00:38:07.204 [2024-12-13 03:49:08.234835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.234936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.234950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.235130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.235171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.235388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.235430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.235654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.235696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.235946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.236014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.236267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.236340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.236507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.236561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.236727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.236743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.236860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.236898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 
00:38:07.204 [2024-12-13 03:49:08.237052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.237106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.237325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.237367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.237512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.237554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.237752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.237793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.237956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.238000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.238148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.238190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.238337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.238384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.238597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.238639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.238767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.238807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.239024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.239068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 
00:38:07.204 [2024-12-13 03:49:08.239266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.239280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.239420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.239433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.239575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.239588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.239741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.239755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.239912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.239932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 
00:38:07.204 [2024-12-13 03:49:08.240751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.240947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.240959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.241105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.204 [2024-12-13 03:49:08.241120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.204 qpair failed and we were unable to recover it. 00:38:07.204 [2024-12-13 03:49:08.241257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.241342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.241434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.241634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.241739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 
00:38:07.205 [2024-12-13 03:49:08.241828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.241983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.241997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.242770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 
00:38:07.205 [2024-12-13 03:49:08.242867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.242880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.243866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.243878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 
00:38:07.205 [2024-12-13 03:49:08.244183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.244837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.244849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.245352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.245376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.245636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.245684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.245829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.245871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 
00:38:07.205 [2024-12-13 03:49:08.246221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.246264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.246366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.246387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.246542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.246563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.246718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.246744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.247024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.247069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.247328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.205 [2024-12-13 03:49:08.247349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.205 qpair failed and we were unable to recover it. 00:38:07.205 [2024-12-13 03:49:08.247437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.247458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.247560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.247581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.247749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.247770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.247868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.247937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 
00:38:07.206 [2024-12-13 03:49:08.248159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.248201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.248405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.248445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.248610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.248653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.248811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.248903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.249243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.249287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.249422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.249465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.249680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.249723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.249956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.250000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.250154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.250195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.250468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.250482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 
00:38:07.206 [2024-12-13 03:49:08.250568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.250582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.250676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.250720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.250954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.250999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.251161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.251204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.251435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.251459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.251650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.251672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.251769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.251794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.251885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.251906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 
00:38:07.206 [2024-12-13 03:49:08.252368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.252953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.252968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 
00:38:07.206 [2024-12-13 03:49:08.253442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.253895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.253909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.254004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.254019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.254107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.254123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.254206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.206 [2024-12-13 03:49:08.254220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.206 qpair failed and we were unable to recover it. 00:38:07.206 [2024-12-13 03:49:08.254297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.254309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.254380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.254393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 
00:38:07.207 [2024-12-13 03:49:08.254560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.254602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.254754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.254795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.254995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.255041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.255176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.255217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.255356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.255403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.255604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.255651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.255816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.255836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.256056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.256078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.256268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.256311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.256505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.256547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 
00:38:07.207 [2024-12-13 03:49:08.256677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.256717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.257036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.257082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.257229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.257270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.257461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.257483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.257640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.257682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.258001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.258046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.258236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.258295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.258512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.258533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.258705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.258726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.258990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.259036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 
00:38:07.207 [2024-12-13 03:49:08.259280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.259322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.259470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.259484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.259662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.259710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.259940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.259984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.260198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.260238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.260432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.260447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.260666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.260708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.260933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.260975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.261216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.261260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.261527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.261568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 
00:38:07.207 [2024-12-13 03:49:08.261730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.261773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.261964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.262007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.262218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.262260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.262400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.262441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.262594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.262641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.262775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.262797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.207 [2024-12-13 03:49:08.262969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.207 [2024-12-13 03:49:08.262990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.207 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.263134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.263149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.263302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.263315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.263405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.263418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 
00:38:07.208 [2024-12-13 03:49:08.263576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.263617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.263771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.263814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.263950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.263994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.264131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.264184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.264381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.264394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.264629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.264670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.264890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.264945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.265105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.265149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.265407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.265450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.265730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.265772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 
00:38:07.208 [2024-12-13 03:49:08.265984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.266028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.266279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.266322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.266481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.266522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.266783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.266825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.266974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.267016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.267250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.267292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.267491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.267516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.267614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.267637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.267814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.267830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.267989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.268003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 
00:38:07.208 [2024-12-13 03:49:08.268175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.268217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.268414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.268456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.268627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.268670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.268780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.268798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.268878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.268892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.269055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.269069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.269140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.269167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.269292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.269333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.269550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.269594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.269724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.269768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 
00:38:07.208 [2024-12-13 03:49:08.269971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.270230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.270386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.270542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.270690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.270797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.270908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.270937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.271074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.271088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.208 [2024-12-13 03:49:08.271322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.208 [2024-12-13 03:49:08.271336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.208 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.271485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.271498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 
00:38:07.209 [2024-12-13 03:49:08.271594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.271608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.271691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.271706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.271797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.271810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.271902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.271923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 
00:38:07.209 [2024-12-13 03:49:08.272765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.272941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.272957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.273930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.273944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.274086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.274102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 
00:38:07.209 [2024-12-13 03:49:08.274199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.274213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.274417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.274431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.274583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.274627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.274761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.274803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.274943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.274988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.275172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.275215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.275451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.275466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.275618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.275632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.275780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.275794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.275952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.275996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 
00:38:07.209 [2024-12-13 03:49:08.276203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.276390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.276493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.276641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.276735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.276816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.276970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.276986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.277130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.277144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.277368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.277409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.277605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.277648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 
00:38:07.209 [2024-12-13 03:49:08.277853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.209 [2024-12-13 03:49:08.277895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.209 qpair failed and we were unable to recover it. 00:38:07.209 [2024-12-13 03:49:08.278104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.278146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.278373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.278415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.278611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.278625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.278774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.278788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.278936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.278950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.279107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.279150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.279289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.279337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.279491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.279533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.279739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.279760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 
00:38:07.210 [2024-12-13 03:49:08.279850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.279872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.279975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.279996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.280156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.280176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.280321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.280344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.280435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.280449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.280542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.280555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.280628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.280656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.280876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.280932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.281142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.281184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.281322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.281363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 
00:38:07.210 [2024-12-13 03:49:08.281499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.281529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.281667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.281681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.281933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.281977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.282183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.282226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.282406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.282420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.282576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.282590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.282745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.282787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.283019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.283258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.283429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 
00:38:07.210 [2024-12-13 03:49:08.283528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.283628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.283745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.283951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.283996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.284195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.284238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.284354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.284368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.284522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.284536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.284608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.284622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.284849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.284870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.285012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.285027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 
00:38:07.210 [2024-12-13 03:49:08.285166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.285180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.285335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.285351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.285415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.210 [2024-12-13 03:49:08.285429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.210 qpair failed and we were unable to recover it. 00:38:07.210 [2024-12-13 03:49:08.285533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.285557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.285649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.285671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.285816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.285837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.285983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.286000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.286088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.286101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.286278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.286322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.286530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.286572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 
00:38:07.211 [2024-12-13 03:49:08.286738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.286781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.286985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.287029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.287173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.287217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.287412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.287455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.287655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.287697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.287830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.287871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.288030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.288080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.288214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.288255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.288475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.288519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.288726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.288741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 
00:38:07.211 [2024-12-13 03:49:08.288893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.288905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.289069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.289084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.289272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.289286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.289440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.289481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.289692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.289733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.289945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.289988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.290123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.290164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.290301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.290348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.211 qpair failed and we were unable to recover it. 00:38:07.211 [2024-12-13 03:49:08.290448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.211 [2024-12-13 03:49:08.290469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.290552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.290565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 
00:38:07.212 [2024-12-13 03:49:08.290714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.290755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.290948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.290991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.291154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.291195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.291462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.291504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.291638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.291678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.291828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.291870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.292034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.292077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.292293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.292335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.292483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.292524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.292677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.292694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 
00:38:07.212 [2024-12-13 03:49:08.292849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.292867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.293091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.293178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.293334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.293381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.293602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.293623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.293708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.293730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.293897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.293928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.294097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.294302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.294468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.294555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 
00:38:07.212 [2024-12-13 03:49:08.294654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.294754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.294866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.294881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.295022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.295037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.295110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.295123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.295212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.295227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.295384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.295432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.295583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.295625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.295773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.295817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.296103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.296148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 
00:38:07.212 [2024-12-13 03:49:08.296296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.296339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.296470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.296512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.296642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.296684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.296808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.296823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.296910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.296941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.297119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.297135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.297231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.297246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.297313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.212 [2024-12-13 03:49:08.297327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.212 qpair failed and we were unable to recover it. 00:38:07.212 [2024-12-13 03:49:08.297487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.213 [2024-12-13 03:49:08.297501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.213 qpair failed and we were unable to recover it. 00:38:07.213 [2024-12-13 03:49:08.297711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.213 [2024-12-13 03:49:08.297724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.213 qpair failed and we were unable to recover it. 
00:38:07.213 [2024-12-13 03:49:08.297808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.213 [2024-12-13 03:49:08.297823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.213 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 03:49:08.297808 through 03:49:08.325478, differing only in timestamps and occasionally in the tqpair handle (0x61500033fe80, 0x61500032ff80, 0x615000326480); every attempt targets addr=10.0.0.2, port=4420 ...]
00:38:07.218 [2024-12-13 03:49:08.325465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.325478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it.
00:38:07.218 [2024-12-13 03:49:08.325574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.325587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.325724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.325737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.325879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.325895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.325976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.325991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 
00:38:07.218 [2024-12-13 03:49:08.326818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.326984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.326997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.327882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.327895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 
00:38:07.218 [2024-12-13 03:49:08.328149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.328985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.328998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 
00:38:07.218 [2024-12-13 03:49:08.329441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.218 [2024-12-13 03:49:08.329974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.218 [2024-12-13 03:49:08.329987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.218 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 
00:38:07.219 [2024-12-13 03:49:08.330587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.330841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.330997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 
00:38:07.219 [2024-12-13 03:49:08.331792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.331959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.331972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.332961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.332976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 
00:38:07.219 [2024-12-13 03:49:08.333110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.333880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.333892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 
00:38:07.219 [2024-12-13 03:49:08.334212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.219 [2024-12-13 03:49:08.334679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.219 qpair failed and we were unable to recover it. 00:38:07.219 [2024-12-13 03:49:08.334819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.334834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.334967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.334980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.335158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.335316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.335411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 
00:38:07.220 [2024-12-13 03:49:08.335565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.335659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.335812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.335960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.335988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.336124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.336139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.336348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.336362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.336431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.336444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.336521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.336533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.336691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.336705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.336890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.336904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 
00:38:07.220 [2024-12-13 03:49:08.337085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.337215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.337367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.337470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.337590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.337762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.337977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.337991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 
00:38:07.220 [2024-12-13 03:49:08.338460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.338966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.338982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.339062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.339075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.339137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.339150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.339236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.339250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.339328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.339342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 00:38:07.220 [2024-12-13 03:49:08.339424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.220 [2024-12-13 03:49:08.339440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.220 qpair failed and we were unable to recover it. 
00:38:07.220 [2024-12-13 03:49:08.339615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.339630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.339713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.339728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.339866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.339881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.339964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.339978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 
00:38:07.221 [2024-12-13 03:49:08.340805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.340888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.340900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 00:38:07.221 [2024-12-13 03:49:08.341774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.221 [2024-12-13 03:49:08.341786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.221 qpair failed and we were unable to recover it. 
00:38:07.221 [2024-12-13 03:49:08.341883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.221 [2024-12-13 03:49:08.341914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:07.221 qpair failed and we were unable to recover it.
00:38:07.221 [2024-12-13 03:49:08.342030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.221 [2024-12-13 03:49:08.342067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:07.221 qpair failed and we were unable to recover it.
00:38:07.221 [2024-12-13 03:49:08.342250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.221 [2024-12-13 03:49:08.342296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:07.221 qpair failed and we were unable to recover it.
00:38:07.221 [2024-12-13 03:49:08.342445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.221 [2024-12-13 03:49:08.342467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:07.221 qpair failed and we were unable to recover it.
[The same three-line failure sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 03:49:08.342540 through 03:49:08.367864 (console timestamps 00:38:07.221–00:38:07.507), almost always for tqpair=0x61500033fe80, with periodic retries for tqpair=0x615000350000, 0x615000326480, and 0x61500032ff80.]
00:38:07.507 [2024-12-13 03:49:08.368036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 [2024-12-13 03:49:08.368622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.368903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.368935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.369039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.369062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 
00:38:07.507 [2024-12-13 03:49:08.369149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.369165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.369302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.369316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.369402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.369415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.369485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.369497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.507 qpair failed and we were unable to recover it. 00:38:07.507 [2024-12-13 03:49:08.369587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.507 [2024-12-13 03:49:08.369601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.369747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.369760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.369992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 
00:38:07.508 [2024-12-13 03:49:08.370538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.370878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.370895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.371058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.371184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.371343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.371495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.371658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 
00:38:07.508 [2024-12-13 03:49:08.371767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.371849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.371863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.372934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.372949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 
00:38:07.508 [2024-12-13 03:49:08.373107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.373121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.373200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.373213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.373350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.373364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.508 [2024-12-13 03:49:08.373508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.508 [2024-12-13 03:49:08.373521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.508 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.373654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.373668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.373748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.373761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.373907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.373927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 
00:38:07.509 [2024-12-13 03:49:08.374290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.374858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.374871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 
00:38:07.509 [2024-12-13 03:49:08.375569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.375974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.375989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.376077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.376245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.376387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.376481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.376572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.376737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 
00:38:07.509 [2024-12-13 03:49:08.376902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.376916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.509 [2024-12-13 03:49:08.377765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.509 qpair failed and we were unable to recover it. 00:38:07.509 [2024-12-13 03:49:08.377849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.377863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.377948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.377962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 
00:38:07.510 [2024-12-13 03:49:08.378062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.378970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.378984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 
00:38:07.510 [2024-12-13 03:49:08.379375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.379882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.379896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 
00:38:07.510 [2024-12-13 03:49:08.380593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.380895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.380990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.381111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.381207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.381366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.381454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.381606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 
00:38:07.510 [2024-12-13 03:49:08.381766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.381780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.382005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.510 qpair failed and we were unable to recover it. 00:38:07.510 [2024-12-13 03:49:08.382088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.510 [2024-12-13 03:49:08.382102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.382186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.382201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.382282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.382297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.382511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.382525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.382606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.382620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.382755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.382769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.382925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.382942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 
00:38:07.511 [2024-12-13 03:49:08.383179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.383929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.383943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 
00:38:07.511 [2024-12-13 03:49:08.384238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.384891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.384906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 
00:38:07.511 [2024-12-13 03:49:08.385426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.385860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.385878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.386020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.511 [2024-12-13 03:49:08.386035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.511 qpair failed and we were unable to recover it. 00:38:07.511 [2024-12-13 03:49:08.386141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.386155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.386295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.386309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.386448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.386461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.386533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.386548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 
00:38:07.512 [2024-12-13 03:49:08.386731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.386746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.386828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.386845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.387797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 
00:38:07.512 [2024-12-13 03:49:08.387949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.387963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.388879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.388893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.389042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 
00:38:07.512 [2024-12-13 03:49:08.389196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.389288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.389435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.389599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.389680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.389859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.389874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.390020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.390034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.390118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.390132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.390337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.390353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.512 [2024-12-13 03:49:08.390491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.390504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 
00:38:07.512 [2024-12-13 03:49:08.390603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.512 [2024-12-13 03:49:08.390616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.512 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.390702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.390716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.390783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.390797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.390930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.390945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 
00:38:07.513 [2024-12-13 03:49:08.391746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.391931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.391945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 
00:38:07.513 [2024-12-13 03:49:08.392816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.392922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.392988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.513 [2024-12-13 03:49:08.393978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.393994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 
00:38:07.513 [2024-12-13 03:49:08.394072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.513 [2024-12-13 03:49:08.394085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.513 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.394845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.394999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.395095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 
00:38:07.514 [2024-12-13 03:49:08.395279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.395429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.395643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.395726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.395882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.395898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 
00:38:07.514 [2024-12-13 03:49:08.396578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.396915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.396942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 
00:38:07.514 [2024-12-13 03:49:08.397674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.397950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.397965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.398103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.398117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.398210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.398223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.514 [2024-12-13 03:49:08.398293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.514 [2024-12-13 03:49:08.398307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.514 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.398375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.398388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.398468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.398480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.398569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.398582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.398730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.398743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 
00:38:07.515 [2024-12-13 03:49:08.398826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.398839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.398911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.398931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.399801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 
00:38:07.515 [2024-12-13 03:49:08.399909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.399927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.400892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.400905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 
00:38:07.515 [2024-12-13 03:49:08.401290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.401965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.401980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.402086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.402099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 00:38:07.515 [2024-12-13 03:49:08.402253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.402268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.515 qpair failed and we were unable to recover it. 
00:38:07.515 [2024-12-13 03:49:08.402469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.515 [2024-12-13 03:49:08.402481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.402652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.402666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.402748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.402763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.402899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.402930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.403087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.403186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.403340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.403503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.403581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.403753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 
00:38:07.516 [2024-12-13 03:49:08.403933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.403948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.404155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.404189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.404300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.404333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.404512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.404544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.404702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.404718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.404866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.404879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 
00:38:07.516 [2024-12-13 03:49:08.405471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.405964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.405979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.406046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.406149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.406257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.406413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.406578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.406734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 
00:38:07.516 [2024-12-13 03:49:08.406947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.406961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.407047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.407060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.407226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.407239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.407467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.407481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.407578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.516 [2024-12-13 03:49:08.407592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.516 qpair failed and we were unable to recover it. 00:38:07.516 [2024-12-13 03:49:08.407809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.407823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.407993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 
00:38:07.517 [2024-12-13 03:49:08.408437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.408942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.408956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 
00:38:07.517 [2024-12-13 03:49:08.409616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.409874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.409901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.410153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.410175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.410349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.410371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.410479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.410500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.410654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.410675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.410835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.410851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.410942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.410956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.411055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 
00:38:07.517 [2024-12-13 03:49:08.411243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.411351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.411515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.411669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.411882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.411980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.411994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.412082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.412096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.412201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.412214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.412294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.412308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 00:38:07.517 [2024-12-13 03:49:08.412394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.517 [2024-12-13 03:49:08.412407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.517 qpair failed and we were unable to recover it. 
00:38:07.517 [2024-12-13 03:49:08.412664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.412678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.412834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.412848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.412926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.412940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.413093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.413241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.413437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.413524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.413685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.413852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.413989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 
00:38:07.518 [2024-12-13 03:49:08.414089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.414883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.414995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.415150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.415259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 
00:38:07.518 [2024-12-13 03:49:08.415367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.415539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.415686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.415877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.415903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.415995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.416094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.416197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.416294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.416446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.416613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 
00:38:07.518 [2024-12-13 03:49:08.416695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.518 qpair failed and we were unable to recover it. 00:38:07.518 [2024-12-13 03:49:08.416806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.518 [2024-12-13 03:49:08.416820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.416900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.416913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.416989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.417097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.417267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.417381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.417599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.417682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.417775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 
00:38:07.519 [2024-12-13 03:49:08.417873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.417887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.418908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.418928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 
00:38:07.519 [2024-12-13 03:49:08.419196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.419935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.419950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 
00:38:07.519 [2024-12-13 03:49:08.420302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.519 [2024-12-13 03:49:08.420852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.519 [2024-12-13 03:49:08.420865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.519 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.420956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.420971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.421059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.421272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.421356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 
00:38:07.520 [2024-12-13 03:49:08.421538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.421638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.421752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.421908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.421930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 
00:38:07.520 [2024-12-13 03:49:08.422763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.422974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.422988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 
00:38:07.520 [2024-12-13 03:49:08.423741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.423925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.423998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 00:38:07.520 [2024-12-13 03:49:08.424823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.520 [2024-12-13 03:49:08.424836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.520 qpair failed and we were unable to recover it. 
00:38:07.520 [2024-12-13 03:49:08.424927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.424941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.425913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.425942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 
00:38:07.521 [2024-12-13 03:49:08.426261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.426939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.426953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 
00:38:07.521 [2024-12-13 03:49:08.427296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.427959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.427973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.428044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.428193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.428373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.428612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 
00:38:07.521 [2024-12-13 03:49:08.428691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.428798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.428945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.428958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.429042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.429055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.521 [2024-12-13 03:49:08.429135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.521 [2024-12-13 03:49:08.429148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.521 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.429242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.429255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.429353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.429366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.429512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.429526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.429616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.429629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.429776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.429789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 
00:38:07.522 [2024-12-13 03:49:08.429860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.429873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.430050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.430078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.430252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.430274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.430470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.430491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.430603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.430623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.430784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.430804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.431032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.431147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.431230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.431379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 
00:38:07.522 [2024-12-13 03:49:08.431537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.431694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.431934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.431947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.432833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.432846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 
00:38:07.522 [2024-12-13 03:49:08.432989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.433834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.433847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.434000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.522 [2024-12-13 03:49:08.434014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.522 qpair failed and we were unable to recover it. 00:38:07.522 [2024-12-13 03:49:08.434165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 
00:38:07.523 [2024-12-13 03:49:08.434254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.434341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.434497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.434592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.434678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.434778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.434941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.434955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 
00:38:07.523 [2024-12-13 03:49:08.435429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.435970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.435984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 
00:38:07.523 [2024-12-13 03:49:08.436683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.436932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.436946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.437091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.437189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.437338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.437561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.437740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.523 [2024-12-13 03:49:08.437834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.523 qpair failed and we were unable to recover it. 00:38:07.523 [2024-12-13 03:49:08.437931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.437945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 
00:38:07.524 [2024-12-13 03:49:08.438034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.438970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.438984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 
00:38:07.524 [2024-12-13 03:49:08.439375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.439944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.439957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 
00:38:07.524 [2024-12-13 03:49:08.440568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.440818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.440993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.441084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.441175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.441363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.441510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.441672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.441832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 
00:38:07.524 [2024-12-13 03:49:08.441923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.441936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.442086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.442099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.442170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.442183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.524 [2024-12-13 03:49:08.442343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.524 [2024-12-13 03:49:08.442356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.524 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.442507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.442520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.442673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.442686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.442754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.442767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.442855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.442868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 
00:38:07.525 [2024-12-13 03:49:08.443361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.443903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.443916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.444065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.444158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.444338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.444484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 
00:38:07.525 [2024-12-13 03:49:08.444585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.444781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.444970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.444984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.445148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.445161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.445409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.445422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.445509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.445522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.445604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.445616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.445715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.445728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.445872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.445885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.446031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 
00:38:07.525 [2024-12-13 03:49:08.446129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.446281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.446442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.446612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.446715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.446871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.446885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.447035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.447049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.447180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.525 [2024-12-13 03:49:08.447194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.525 qpair failed and we were unable to recover it. 00:38:07.525 [2024-12-13 03:49:08.447462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.447476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.447558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.447571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 
00:38:07.526 [2024-12-13 03:49:08.447675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.447687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.447769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.447783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.447866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.447878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.448949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.448963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 
00:38:07.526 [2024-12-13 03:49:08.449045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.449137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.449254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.449493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.449615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.449800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.449977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.449991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.450123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.450349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.450463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 
00:38:07.526 [2024-12-13 03:49:08.450561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.450653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.450802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.450952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.450967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.451171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.451262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.451421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.451524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.451670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.451753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 
00:38:07.526 [2024-12-13 03:49:08.451929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.451943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.452191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.452207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.526 qpair failed and we were unable to recover it. 00:38:07.526 [2024-12-13 03:49:08.452343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.526 [2024-12-13 03:49:08.452356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.452450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.452464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.452551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.452564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.452780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.452794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.452943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.452957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.453171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.453185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.453275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.453289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.453441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.453455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 
00:38:07.527 [2024-12-13 03:49:08.453686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.453700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.453773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.453787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.454008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.454022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.454242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.454255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.454348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.454361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.454515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.454529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.454679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.454692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.454854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.454868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.455105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.455120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.455332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.455346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 
00:38:07.527 [2024-12-13 03:49:08.455520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.455534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.455698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.455712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.455846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.455860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.455994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.456008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.456173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.456187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.456354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.456368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.456507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.456521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.456723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.456737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.456891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.456904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.457065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 
00:38:07.527 [2024-12-13 03:49:08.457161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.457319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.457579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.457740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.457836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.457849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.458004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.527 [2024-12-13 03:49:08.458018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.527 qpair failed and we were unable to recover it. 00:38:07.527 [2024-12-13 03:49:08.458105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.458201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.458300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 
00:38:07.528 [2024-12-13 03:49:08.458410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.458509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.458666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.458777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.458943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.458957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.459039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.459259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.459375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.459625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.459732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 
00:38:07.528 [2024-12-13 03:49:08.459884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.459980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.459994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.460891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.460904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.461054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 
00:38:07.528 [2024-12-13 03:49:08.461240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.461336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.461524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.461690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.461790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.461974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.461987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.462053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.462067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.462208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.528 [2024-12-13 03:49:08.462221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.528 qpair failed and we were unable to recover it. 00:38:07.528 [2024-12-13 03:49:08.462369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.462383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.462597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.462611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 
00:38:07.529 [2024-12-13 03:49:08.462693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.462706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.462801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.462814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.463944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.463958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 
00:38:07.529 [2024-12-13 03:49:08.464192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.464836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.464849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 
00:38:07.529 [2024-12-13 03:49:08.465432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.465975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.465988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.466234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.466248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.466410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.466423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.466649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.466662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.466726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.466739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.466899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.466913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 
00:38:07.529 [2024-12-13 03:49:08.467056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.467069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.467281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.467294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.529 qpair failed and we were unable to recover it. 00:38:07.529 [2024-12-13 03:49:08.467388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.529 [2024-12-13 03:49:08.467401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.467549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.467563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.467715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.467728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.467811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.467824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.468062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.468169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.468259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.468415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 
00:38:07.530 [2024-12-13 03:49:08.468540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.468635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.468802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.468815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.469821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 
00:38:07.530 [2024-12-13 03:49:08.469914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.469932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.470912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.470929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.471009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.471119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 
00:38:07.530 [2024-12-13 03:49:08.471277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.471497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.471667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.471771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.471876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.471890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.472054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.472068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.530 [2024-12-13 03:49:08.472213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.530 [2024-12-13 03:49:08.472227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.530 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.472308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.472321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.472416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.472429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.472501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.472513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 
00:38:07.531 [2024-12-13 03:49:08.472660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.472674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.472767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.472780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.472930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.472944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.473126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.473139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.473340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.473353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.473512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.473526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.473673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.473687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.473754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.473767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.473862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.473876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 
00:38:07.531 [2024-12-13 03:49:08.474136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.474889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.474902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 
00:38:07.531 [2024-12-13 03:49:08.475462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.475977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.475991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.476071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.476085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.476230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.476243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.476387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.476400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.476498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.476512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.476650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.476663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 
00:38:07.531 [2024-12-13 03:49:08.476754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.531 [2024-12-13 03:49:08.476767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.531 qpair failed and we were unable to recover it. 00:38:07.531 [2024-12-13 03:49:08.476915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.476934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.477080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.477095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.477249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.477264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.477363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.477378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.477466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.477484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.477715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.477728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.477872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.477886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.478033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.478192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 
00:38:07.532 [2024-12-13 03:49:08.478352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.478518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.478604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.478763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.478849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.478862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.479084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.479267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.479372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.479617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.479771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 
00:38:07.532 [2024-12-13 03:49:08.479883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.479979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.479993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.480822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.480835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.481083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.481097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 
00:38:07.532 [2024-12-13 03:49:08.481242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.481256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.481467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.481484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.481571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.481585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.481734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.481748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.481899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.481913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.482001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.482015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.482189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.482203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.482338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.532 [2024-12-13 03:49:08.482352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.532 qpair failed and we were unable to recover it. 00:38:07.532 [2024-12-13 03:49:08.482432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.482445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.482649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.482662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 
00:38:07.533 [2024-12-13 03:49:08.482730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.482744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.482888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.482902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.483091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.483105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.483131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:07.533 [2024-12-13 03:49:08.483164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:07.533 [2024-12-13 03:49:08.483177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:07.533 [2024-12-13 03:49:08.483187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:07.533 [2024-12-13 03:49:08.483198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:07.533 [2024-12-13 03:49:08.483320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.483333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.483536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.483550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.483698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.483712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.483780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.483799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.483886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.483899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 
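The app_setup_trace notices above spell out how to inspect the tracepoints enabled by the 0xFFFF group mask while the application is still running. A short sketch of both options named in the notice (the output redirection and destination paths are illustrative):

    # snapshot the nvmf tracepoints of shared-memory instance 0 at runtime
    spdk_trace -s nvmf -i 0 > nvmf_trace.snapshot.txt
    # or keep the raw trace buffer named in the notice for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/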
00:38:07.533 [2024-12-13 03:49:08.484071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.484181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.484342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.484434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.484648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.484828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.484927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.484941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.485087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.485198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.485347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 
00:38:07.533 [2024-12-13 03:49:08.485426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.485523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.485637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:07.533 [2024-12-13 03:49:08.485708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.485727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:07.533 [2024-12-13 03:49:08.485792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:07.533 [2024-12-13 03:49:08.485861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.485815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:07.533 [2024-12-13 03:49:08.485875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.486020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.486105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.486263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.486476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.486566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 
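The reactor_run notices interleaved here show an SPDK application's event framework bringing up one reactor (polling thread) per core in its core mask, cores 4-7, while the connect retries continue on the host side. A sketch of how such a placement is normally requested when launching an SPDK app such as nvmf_tgt, assuming the standard -m core-mask option (0xF0 selects cores 4-7; the binary path is illustrative):

    # start the NVMe-oF target pinned to cores 4-7 (core mask 0xF0)
    ./build/bin/nvmf_tgt -m 0xF0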
00:38:07.533 [2024-12-13 03:49:08.486728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.533 [2024-12-13 03:49:08.486834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.533 [2024-12-13 03:49:08.486852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.533 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.486941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.486960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.487806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 
00:38:07.534 [2024-12-13 03:49:08.487963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.487978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.488225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.488239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.488393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.488407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.488627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.488641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.488740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.488753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.488984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.488998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.489210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.489362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.489476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.489577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 
00:38:07.534 [2024-12-13 03:49:08.489728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.489823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.489937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.489951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.490868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.490881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 
00:38:07.534 [2024-12-13 03:49:08.491013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.491026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.491130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.491144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.491352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.491367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.491571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.491585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.491671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.491684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.491840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.491858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.492008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.492022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.492235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.492252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.492428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.492442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.534 [2024-12-13 03:49:08.492589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.492603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 
00:38:07.534 [2024-12-13 03:49:08.492690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.534 [2024-12-13 03:49:08.492703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.534 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.492784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.492800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.492895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.492909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.492999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.493217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.493377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.493537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.493639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.493788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.493983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.493997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 
00:38:07.535 [2024-12-13 03:49:08.494221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.494236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.494329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.494342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.494482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.494496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.494647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.494660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.494862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.494876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 
00:38:07.535 [2024-12-13 03:49:08.495637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.495856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.495871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.496843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.496856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 
00:38:07.535 [2024-12-13 03:49:08.496990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.497911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.535 qpair failed and we were unable to recover it. 00:38:07.535 [2024-12-13 03:49:08.497991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.535 [2024-12-13 03:49:08.498006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 
00:38:07.536 [2024-12-13 03:49:08.498104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.498250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.498402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.498497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.498592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.498810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.498959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.498973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.499061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.499075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.499225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.499238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.499466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.499480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 
00:38:07.536 [2024-12-13 03:49:08.499658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.499671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.499808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.499821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.499966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.499979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.500065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.500079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.500236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.500249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.500482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.500496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.500584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.500598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.500806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.500821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.500970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.500984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.501136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 
00:38:07.536 [2024-12-13 03:49:08.501231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.501399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.501490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.501592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.501739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.501895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.501909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.502077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.502166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.502327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.502492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 
00:38:07.536 [2024-12-13 03:49:08.502650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.502807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.502906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.502928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.503078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.503091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.503344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.503358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.503557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.503570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.503783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.503796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.503951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.536 [2024-12-13 03:49:08.503966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.536 qpair failed and we were unable to recover it. 00:38:07.536 [2024-12-13 03:49:08.504156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.504169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.504324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.504337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 
00:38:07.537 [2024-12-13 03:49:08.504505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.504519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.504598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.504613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.504708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.504721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.504867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.504880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 
00:38:07.537 [2024-12-13 03:49:08.505803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.505913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.505999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.506874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 
00:38:07.537 [2024-12-13 03:49:08.506966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.506980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.507137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.507293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.507457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.507552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.507628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.507825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.507987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.508000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.508209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.508223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.508304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.508317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 
00:38:07.537 [2024-12-13 03:49:08.508491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.508505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.508594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.508608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.508805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.537 [2024-12-13 03:49:08.508818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.537 qpair failed and we were unable to recover it. 00:38:07.537 [2024-12-13 03:49:08.508968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.508981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 
00:38:07.538 [2024-12-13 03:49:08.509795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.509902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.509923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.510975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.510989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 
00:38:07.538 [2024-12-13 03:49:08.511146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.511239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.511404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.511625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.511730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.511825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.511902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.511914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.512094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.512253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.512408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 
00:38:07.538 [2024-12-13 03:49:08.512500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.512660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.512764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.512912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.512930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.513020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.513034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.513187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.513199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.513291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.513305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.538 [2024-12-13 03:49:08.513376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.538 [2024-12-13 03:49:08.513389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.538 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.513460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.513473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.513607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.513621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 
00:38:07.539 [2024-12-13 03:49:08.513690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.513703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.513831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.513844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.513999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.514190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.514433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.514523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.514684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.514851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.514938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.514951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 
00:38:07.539 [2024-12-13 03:49:08.515119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.515935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.515948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.516101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.516114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.516252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.516264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 
00:38:07.539 [2024-12-13 03:49:08.516330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.516343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.516491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.516504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.516665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.516679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.516843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.516856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.517056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.517071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.517218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.517232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.517432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.517445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.517595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.517608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.517758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.517771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.517908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.517928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 
00:38:07.539 [2024-12-13 03:49:08.518104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.518117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.518358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.539 [2024-12-13 03:49:08.518372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.539 qpair failed and we were unable to recover it. 00:38:07.539 [2024-12-13 03:49:08.518442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.518455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.518522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.518535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.518753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.518767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.518911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.518935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.519032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.519044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.519184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.519198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.519338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.519372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.519561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.519591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 
00:38:07.540 [2024-12-13 03:49:08.519760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.519782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.519944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.519966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.520142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.520163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.520259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.520280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.520567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.520592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.520824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.520845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.521058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.521080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.521167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.521188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.521368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.521389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.521544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.521564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 
00:38:07.540 [2024-12-13 03:49:08.521658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.521678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.521791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.521816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.522035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.522057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.522269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.522290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.522450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.522466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.522622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.522636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.522778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.522791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.523011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.523163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.523328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 
00:38:07.540 [2024-12-13 03:49:08.523493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.523709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.523813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.523904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.523922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.524007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.524020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.524254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.540 [2024-12-13 03:49:08.524268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.540 qpair failed and we were unable to recover it. 00:38:07.540 [2024-12-13 03:49:08.524467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.524480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.524615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.524628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.524763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.524776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.524930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.524944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 
00:38:07.541 [2024-12-13 03:49:08.525041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.525205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.525285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.525397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.525628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.525788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.525956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.525970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.526088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.526102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.526346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.526370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.526522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.526543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 
00:38:07.541 [2024-12-13 03:49:08.526696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.526717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.526888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.526909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.527011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.527031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.527199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.527220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.527387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.527408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.527624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.527644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.527739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.527754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.527899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.527913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.528069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.528082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.528331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.528344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 
00:38:07.541 [2024-12-13 03:49:08.528478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.528491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.528564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.528580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.528783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.528804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.528877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.528890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.529059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.529073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.529246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.529260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.529464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.529477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.529634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.529648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.529746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.529759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.541 [2024-12-13 03:49:08.529834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.529848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 
00:38:07.541 [2024-12-13 03:49:08.529995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.541 [2024-12-13 03:49:08.530009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.541 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.530102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.530116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.530250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.530263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.530466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.530480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.530630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.530643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.530814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.530827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.530904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.530926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.531082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.531096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.531317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.531330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.531495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.531508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 
00:38:07.542 [2024-12-13 03:49:08.531577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.531590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.531722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.531735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.531832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.531846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.531995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.532106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.532201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.532352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.532565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.532740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.532949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.532971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 
00:38:07.542 [2024-12-13 03:49:08.533119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.533140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.533222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.533242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.533354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.533375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.533539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.533560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.533753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.533774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.533940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.533961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.534128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.534149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.534313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.542 [2024-12-13 03:49:08.534333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.542 qpair failed and we were unable to recover it. 00:38:07.542 [2024-12-13 03:49:08.534481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.534502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.534669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.534689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 
00:38:07.543 [2024-12-13 03:49:08.534879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.534900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.535159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.535184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.535348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.535369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.535599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.535620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.535873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.535888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.536027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.536199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.536435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.536610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.536759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 
00:38:07.543 [2024-12-13 03:49:08.536868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.536961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.536974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.537144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.537158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.537379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.537393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.537490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.537504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.537660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.537674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.537761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.537774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.537925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.537939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.538087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.538100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.538234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.538247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 
00:38:07.543 [2024-12-13 03:49:08.538472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.538485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.538728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.538741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.538970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.538984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.539051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.539064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.539161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.539174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.539273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.539286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.539432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.539446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.539622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.539635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.539870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.539894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.540015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.540041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 
00:38:07.543 [2024-12-13 03:49:08.540289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.540309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.540410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.540431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.543 [2024-12-13 03:49:08.540541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.543 [2024-12-13 03:49:08.540563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.543 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.540658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.540679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.540844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.540865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.540970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.540992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.541235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.541256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.541353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.541373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.541476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.541496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.541666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.541686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 
00:38:07.544 [2024-12-13 03:49:08.541902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.541921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.542016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.542032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.542253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.542266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.542359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.542372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.542522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.542535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.542627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.542639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.542859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.542872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 
00:38:07.544 [2024-12-13 03:49:08.543389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.543887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.543905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 
00:38:07.544 [2024-12-13 03:49:08.544604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.544950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.544964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.545042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.545055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.545255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.545269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.545417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.544 [2024-12-13 03:49:08.545430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.544 qpair failed and we were unable to recover it. 00:38:07.544 [2024-12-13 03:49:08.545517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.545 [2024-12-13 03:49:08.545531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:07.545 qpair failed and we were unable to recover it. 00:38:07.545 [2024-12-13 03:49:08.545693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.545 [2024-12-13 03:49:08.545716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:07.545 qpair failed and we were unable to recover it. 00:38:07.545 [2024-12-13 03:49:08.545826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.545 [2024-12-13 03:49:08.545850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:07.545 qpair failed and we were unable to recover it. 00:38:07.545 A controller has encountered a failure and is being reset. 
00:38:07.545 [2024-12-13 03:49:08.546111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.545 [2024-12-13 03:49:08.546154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420 00:38:07.545 [2024-12-13 03:49:08.546175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:38:07.545 [2024-12-13 03:49:08.546207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:38:07.545 [2024-12-13 03:49:08.546229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:38:07.545 [2024-12-13 03:49:08.546250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:38:07.545 [2024-12-13 03:49:08.546272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:38:07.545 Unable to reset the controller. 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.113 Malloc0 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.113 [2024-12-13 03:49:09.192099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:08.113 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.114 [2024-12-13 03:49:09.220392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:08.114 03:49:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2918216 00:38:08.681 Controller properly reset. 00:38:13.954 Initializing NVMe Controllers 00:38:13.954 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:13.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:13.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:13.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:13.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:13.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:13.954 Initialization complete. Launching workers. 
00:38:13.954 Starting thread on core 1 00:38:13.954 Starting thread on core 2 00:38:13.954 Starting thread on core 3 00:38:13.954 Starting thread on core 0 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:13.954 00:38:13.954 real 0m11.505s 00:38:13.954 user 0m36.544s 00:38:13.954 sys 0m6.066s 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.954 ************************************ 00:38:13.954 END TEST nvmf_target_disconnect_tc2 00:38:13.954 ************************************ 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:13.954 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:13.955 rmmod nvme_tcp 00:38:13.955 rmmod nvme_fabrics 00:38:13.955 rmmod nvme_keyring 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2918881 ']' 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2918881 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2918881 ']' 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2918881 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2918881 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2918881' 00:38:13.955 killing process with pid 2918881 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2918881 00:38:13.955 03:49:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2918881 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:14.892 03:49:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.431 03:49:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:17.431 00:38:17.431 real 0m20.983s 00:38:17.431 user 1m7.038s 00:38:17.431 sys 0m10.961s 00:38:17.431 03:49:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:17.431 03:49:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:17.431 ************************************ 00:38:17.431 END TEST nvmf_target_disconnect 00:38:17.431 ************************************ 00:38:17.431 03:49:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:17.431 00:38:17.431 real 8m8.322s 00:38:17.431 user 19m23.125s 00:38:17.431 sys 2m7.462s 00:38:17.431 03:49:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:17.431 03:49:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:17.431 ************************************ 00:38:17.431 END TEST nvmf_host 00:38:17.431 ************************************ 00:38:17.431 03:49:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:17.431 03:49:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:17.431 03:49:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:17.431 03:49:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:17.431 03:49:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.431 03:49:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:17.431 ************************************ 00:38:17.431 START TEST nvmf_target_core_interrupt_mode 00:38:17.431 ************************************ 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:17.431 * Looking for test storage... 00:38:17.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:17.431 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.432 --rc genhtml_branch_coverage=1 00:38:17.432 --rc genhtml_function_coverage=1 00:38:17.432 --rc genhtml_legend=1 00:38:17.432 --rc geninfo_all_blocks=1 00:38:17.432 --rc geninfo_unexecuted_blocks=1 00:38:17.432 00:38:17.432 ' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.432 --rc genhtml_branch_coverage=1 00:38:17.432 --rc genhtml_function_coverage=1 00:38:17.432 --rc genhtml_legend=1 00:38:17.432 --rc geninfo_all_blocks=1 00:38:17.432 --rc geninfo_unexecuted_blocks=1 00:38:17.432 00:38:17.432 ' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.432 --rc genhtml_branch_coverage=1 00:38:17.432 --rc genhtml_function_coverage=1 00:38:17.432 --rc genhtml_legend=1 00:38:17.432 --rc geninfo_all_blocks=1 00:38:17.432 --rc geninfo_unexecuted_blocks=1 00:38:17.432 00:38:17.432 ' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:17.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.432 --rc genhtml_branch_coverage=1 00:38:17.432 --rc genhtml_function_coverage=1 00:38:17.432 --rc genhtml_legend=1 00:38:17.432 --rc geninfo_all_blocks=1 00:38:17.432 --rc geninfo_unexecuted_blocks=1 00:38:17.432 00:38:17.432 ' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:17.432 ************************************ 00:38:17.432 START TEST nvmf_abort 00:38:17.432 ************************************ 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:17.432 * Looking for test storage... 00:38:17.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:17.432 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:17.433 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:17.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.693 --rc genhtml_branch_coverage=1 00:38:17.693 --rc genhtml_function_coverage=1 00:38:17.693 --rc genhtml_legend=1 00:38:17.693 --rc geninfo_all_blocks=1 00:38:17.693 --rc geninfo_unexecuted_blocks=1 00:38:17.693 00:38:17.693 ' 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:17.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.693 --rc genhtml_branch_coverage=1 00:38:17.693 --rc genhtml_function_coverage=1 00:38:17.693 --rc genhtml_legend=1 00:38:17.693 --rc geninfo_all_blocks=1 00:38:17.693 --rc geninfo_unexecuted_blocks=1 00:38:17.693 00:38:17.693 ' 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:17.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.693 --rc genhtml_branch_coverage=1 00:38:17.693 --rc genhtml_function_coverage=1 00:38:17.693 --rc genhtml_legend=1 00:38:17.693 --rc geninfo_all_blocks=1 00:38:17.693 --rc geninfo_unexecuted_blocks=1 00:38:17.693 00:38:17.693 ' 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:17.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:17.693 --rc genhtml_branch_coverage=1 00:38:17.693 --rc genhtml_function_coverage=1 00:38:17.693 --rc genhtml_legend=1 00:38:17.693 --rc geninfo_all_blocks=1 00:38:17.693 --rc geninfo_unexecuted_blocks=1 00:38:17.693 00:38:17.693 ' 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.693 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:17.694 03:49:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:17.694 03:49:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.065 03:49:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.065 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:23.066 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
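The probe above classifies NICs purely by PCI vendor/device ID (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722, plus the Mellanox ConnectX IDs) and then resolves each matching address to its kernel netdev through sysfs. A rough stand-alone sketch of the same idea using lspci follows; the lspci parsing, array names, and the reduced ID list are illustrative assumptions, not the common.sh helpers themselves.

# Sketch: classify NICs by PCI vendor:device ID, then map each match to its netdev.
e810=() mlx=()
while read -r addr _class ids _rest; do
    case "$ids" in
        8086:159b|8086:1592) e810+=("$addr") ;;   # Intel E810 family
        15b3:1017|15b3:1019) mlx+=("$addr") ;;    # Mellanox ConnectX family (subset of the IDs the log checks)
    esac
done < <(lspci -Dn)                               # -D: full PCI address, -n: numeric vendor:device IDs

echo "E810: ${e810[*]} / ConnectX: ${mlx[*]}"

for addr in "${e810[@]}"; do
    # The kernel exposes the interface name under the device's sysfs node; this
    # lookup is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from.
    for netdev in /sys/bus/pci/devices/"$addr"/net/*; do
        [ -e "$netdev" ] && echo "Found net device under $addr: ${netdev##*/}"
    done
done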
00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:23.066 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:23.066 Found net devices under 0000:af:00.0: cvl_0_0 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:23.066 Found net devices under 0000:af:00.1: cvl_0_1 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:23.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:38:23.066 00:38:23.066 --- 10.0.0.2 ping statistics --- 00:38:23.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.066 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:23.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:38:23.066 00:38:23.066 --- 10.0.0.1 ping statistics --- 00:38:23.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.066 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.066 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2923545 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2923545 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2923545 ']' 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.067 03:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.067 [2024-12-13 03:49:23.938994] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:23.067 [2024-12-13 03:49:23.941059] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:23.067 [2024-12-13 03:49:23.941124] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.067 [2024-12-13 03:49:24.058309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:23.067 [2024-12-13 03:49:24.166122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.067 [2024-12-13 03:49:24.166163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.067 [2024-12-13 03:49:24.166175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.067 [2024-12-13 03:49:24.166183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.067 [2024-12-13 03:49:24.166192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.067 [2024-12-13 03:49:24.168290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:23.067 [2024-12-13 03:49:24.168353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.067 [2024-12-13 03:49:24.168365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:23.325 [2024-12-13 03:49:24.485965] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:23.325 [2024-12-13 03:49:24.487125] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:23.325 [2024-12-13 03:49:24.487688] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
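What the common.sh helpers built here is a two-port loopback topology: one E810 port (cvl_0_0) is moved into its own network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, both sit on 10.0.0.0/24, TCP port 4420 is opened in the firewall, and nvmf_tgt is launched inside the namespace with --interrupt-mode on core mask 0xE. A condensed sketch of that sequence is below; the interface names and SPDK path are copied from this run, and backgrounding the target plus polling /var/tmp/spdk.sock is an assumed stand-in for the nvmfappstart/waitforlisten helpers.

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                          # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"      # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                         # target -> initiator

# Start the target inside the namespace: interrupt mode, cores 1-3 (-m 0xE),
# all tracepoint groups (-e 0xFFFF), shared-memory id 0 (-i 0).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done            # crude stand-in for waitforlisten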
00:38:23.325 [2024-12-13 03:49:24.487910] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.585 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.845 [2024-12-13 03:49:24.797355] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.845 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.846 Malloc0 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.846 Delay0 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.846 [2024-12-13 03:49:24.933245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.846 03:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:23.846 [2024-12-13 03:49:25.044245] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:26.389 Initializing NVMe Controllers 00:38:26.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:26.390 controller IO queue size 128 less than required 00:38:26.390 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:26.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:26.390 Initialization complete. Launching workers. 
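The abort test body that just ran boils down to a handful of RPCs plus one example binary: create the TCP transport, back a delay bdev with a malloc bdev (the artificial latency keeps I/O queued long enough to be aborted), expose it as nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, then drive it with build/examples/abort at queue depth 128 for one second on a single core. A reconstruction with plain rpc.py calls is below; the rpc() wrapper and the default /var/tmp/spdk.sock socket are assumptions standing in for the autotest rpc_cmd helper, and the option values are copied verbatim from the log.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256               # transport options exactly as the test passes them
rpc bdev_malloc_create 64 4096 -b Malloc0                        # 64 MiB RAM bdev, 4096-byte blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # large artificial latency so I/O stays in flight
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Abort workload: queue depth 128, one worker core, 1 second run, options as in the log.
"$SPDK/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128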
00:38:26.390 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34185 00:38:26.390 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34242, failed to submit 66 00:38:26.390 success 34185, unsuccessful 57, failed 0 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:26.390 rmmod nvme_tcp 00:38:26.390 rmmod nvme_fabrics 00:38:26.390 rmmod nvme_keyring 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2923545 ']' 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2923545 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2923545 ']' 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2923545 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2923545 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2923545' 00:38:26.390 killing process with pid 2923545 
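Teardown then mirrors setup in reverse, which is what nvmftestfini and the trailing cleanup lines do: drop the subsystem, stop the target, unload the host-side nvme-tcp/nvme-fabrics modules, strip the SPDK-tagged iptables rule, and remove the namespace. A condensed sketch under the same assumptions as above, with NVMF_PID as a placeholder for the pid the test recorded at startup (2923545 in this run):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
NS=cvl_0_0_ns_spdk
NVMF_PID=2923545                                   # placeholder: pid captured when nvmf_tgt was launched

rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

sync
modprobe -v -r nvme-tcp                            # removes nvme_tcp (and, once unused, nvme_fabrics/nvme_keyring)
modprobe -v -r nvme-fabrics

kill "$NVMF_PID"
while kill -0 "$NVMF_PID" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit

# Remove only the firewall rule the test tagged with SPDK_NVMF, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete "$NS"                              # physical port returns to the root namespace
ip -4 addr flush cvl_0_1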
00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2923545 00:38:26.390 03:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2923545 00:38:27.771 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.772 03:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:29.681 00:38:29.681 real 0m12.242s 00:38:29.681 user 0m12.094s 00:38:29.681 sys 0m5.274s 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:29.681 ************************************ 00:38:29.681 END TEST nvmf_abort 00:38:29.681 ************************************ 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:29.681 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:29.682 ************************************ 00:38:29.682 START TEST nvmf_ns_hotplug_stress 00:38:29.682 ************************************ 00:38:29.682 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:29.682 * Looking for test storage... 
00:38:29.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:29.682 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:29.682 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:29.682 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:29.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.942 --rc genhtml_branch_coverage=1 00:38:29.942 --rc genhtml_function_coverage=1 00:38:29.942 --rc genhtml_legend=1 00:38:29.942 --rc geninfo_all_blocks=1 00:38:29.942 --rc geninfo_unexecuted_blocks=1 00:38:29.942 00:38:29.942 ' 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:29.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.942 --rc genhtml_branch_coverage=1 00:38:29.942 --rc genhtml_function_coverage=1 00:38:29.942 --rc genhtml_legend=1 00:38:29.942 --rc geninfo_all_blocks=1 00:38:29.942 --rc geninfo_unexecuted_blocks=1 00:38:29.942 00:38:29.942 ' 00:38:29.942 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:29.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.942 --rc genhtml_branch_coverage=1 00:38:29.942 --rc genhtml_function_coverage=1 00:38:29.942 --rc genhtml_legend=1 00:38:29.942 --rc geninfo_all_blocks=1 00:38:29.942 --rc geninfo_unexecuted_blocks=1 00:38:29.942 00:38:29.943 ' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:29.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.943 --rc genhtml_branch_coverage=1 00:38:29.943 --rc genhtml_function_coverage=1 
00:38:29.943 --rc genhtml_legend=1 00:38:29.943 --rc geninfo_all_blocks=1 00:38:29.943 --rc geninfo_unexecuted_blocks=1 00:38:29.943 00:38:29.943 ' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:29.943 03:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:35.221 03:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:35.221 03:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:35.221 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:35.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:35.221 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:35.222 
03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:35.222 Found net devices under 0000:af:00.0: cvl_0_0 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:35.222 Found net devices under 0000:af:00.1: cvl_0_1 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.222 03:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:35.222 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.481 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:35.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:35.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:38:35.482 00:38:35.482 --- 10.0.0.2 ping statistics --- 00:38:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.482 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:35.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:38:35.482 00:38:35.482 --- 10.0.0.1 ping statistics --- 00:38:35.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.482 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2927692 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2927692 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2927692 ']' 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
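The nvmf_tcp_init sequence traced above boils down to a small two-namespace loopback topology across the two e810 ports found earlier. Condensed from the commands visible in this trace (interface names and addresses are simply the ones this run picked; this is a summary of the traced steps, not the common.sh source), the setup is roughly:

  # move the target-side port into its own network namespace; the initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator: 10.0.0.1/24 on cvl_0_1 (root ns); target: 10.0.0.2/24 on cvl_0_0 (inside the ns)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow the NVMe/TCP listener port in, tagged so the rule can be stripped again at teardown
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # sanity-check reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two successful pings in the output above are exactly that reachability check; the nvmf_tgt process started a few lines below runs under 'ip netns exec cvl_0_0_ns_spdk'.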
00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.482 03:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:35.482 [2024-12-13 03:49:36.606500] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:35.482 [2024-12-13 03:49:36.608600] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:35.482 [2024-12-13 03:49:36.608668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:35.742 [2024-12-13 03:49:36.725280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:35.742 [2024-12-13 03:49:36.832662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:35.742 [2024-12-13 03:49:36.832703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:35.742 [2024-12-13 03:49:36.832715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:35.742 [2024-12-13 03:49:36.832723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:35.742 [2024-12-13 03:49:36.832732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:35.742 [2024-12-13 03:49:36.834925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:35.742 [2024-12-13 03:49:36.834977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.742 [2024-12-13 03:49:36.834989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:36.002 [2024-12-13 03:49:37.139033] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:36.002 [2024-12-13 03:49:37.140049] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:36.002 [2024-12-13 03:49:37.140572] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:36.002 [2024-12-13 03:49:37.140783] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:36.262 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:36.521 [2024-12-13 03:49:37.632000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.521 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:36.780 03:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:37.039 [2024-12-13 03:49:38.028230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:37.039 03:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:37.039 03:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:37.298 Malloc0 00:38:37.298 03:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:37.557 Delay0 00:38:37.557 03:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:37.816 03:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:37.816 NULL1 00:38:37.816 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
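From here on the trace is the hotplug stress itself: a background spdk_nvme_perf reader (PERF_PID 2928166 in this run, launched just below with -t 30 -q 128 -w randread -o 512 -Q 1000 against 10.0.0.2:4420) keeps I/O outstanding while the script repeatedly removes and re-adds namespace 1 and grows the NULL1 bdev by one block per pass. Condensed from the rpc.py calls visible in the trace (the loop form is a paraphrase of what the traced script evidently does, not a copy of ns_hotplug_stress.sh), the setup and one round look roughly like:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # one-time setup traced just above: TCP transport, a subsystem capped at 10 namespaces,
  # data and discovery listeners on 10.0.0.2:4420, a delay bdev over a malloc bdev, and a 1000-block null bdev
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # becomes nsid 1
  $rpc_py bdev_null_create NULL1 1000 512
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  null_size=1000
  # stress loop, repeated for as long as the perf process is alive
  while kill -0 "$PERF_PID"; do
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"
  done

Each 'true' in the trace below is bdev_null_resize reporting success, and the null_size counter climbing through 1001, 1002, ... is that same resize being driven one block at a time.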
00:38:38.075 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2928166 00:38:38.075 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:38.075 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:38.075 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:38.334 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:38.593 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:38.593 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:38.853 true 00:38:38.853 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:38.853 03:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.112 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.112 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:39.112 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:39.373 true 00:38:39.373 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:39.373 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:39.632 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:39.891 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:39.891 03:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:39.891 true 00:38:40.150 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:40.150 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.150 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:40.409 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:40.409 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:40.668 true 00:38:40.668 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:40.668 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:40.927 03:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.185 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:41.185 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:41.445 true 00:38:41.445 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:41.445 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:41.445 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:41.704 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:41.704 03:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:41.963 true 00:38:41.963 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:41.963 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.221 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.480 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:42.480 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:42.740 true 00:38:42.740 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:42.740 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:42.740 03:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:42.999 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:42.999 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:43.258 true 00:38:43.258 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:43.258 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:43.517 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:43.775 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:43.775 03:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:44.034 true 00:38:44.034 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:44.034 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.292 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:44.292 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:44.292 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:44.550 true 00:38:44.550 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2928166 00:38:44.550 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:44.809 03:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:45.067 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:45.067 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:45.326 true 00:38:45.326 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:45.326 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:45.585 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:45.585 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:45.585 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:45.844 true 00:38:45.844 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:45.844 03:49:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.103 03:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:46.362 03:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:46.362 03:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:46.621 true 00:38:46.621 03:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:46.621 03:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:46.880 03:49:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:46.880 03:49:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:46.880 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:47.139 true 00:38:47.139 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:47.139 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:47.397 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:47.657 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:47.657 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:47.915 true 00:38:47.915 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:47.915 03:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.174 03:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:48.432 03:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:48.432 03:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:48.432 true 00:38:48.432 03:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:48.432 03:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:48.691 03:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:48.950 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:48.950 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:49.209 true 00:38:49.209 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:49.209 03:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:49.467 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:49.726 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:49.726 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:49.726 true 00:38:49.726 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:49.726 03:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.002 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:50.261 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:50.261 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:50.520 true 00:38:50.520 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:50.520 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.779 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:51.038 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:51.038 03:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:51.038 true 00:38:51.038 03:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:51.038 03:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:51.297 03:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:51.556 03:49:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:51.556 03:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:51.814 true 00:38:51.814 03:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:51.814 03:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:52.073 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:52.331 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:52.331 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:52.331 true 00:38:52.590 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:52.590 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:52.590 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:52.850 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:52.850 03:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:53.109 true 00:38:53.109 03:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:53.109 03:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:53.368 03:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:53.627 03:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:53.627 03:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:53.627 true 00:38:53.886 03:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:53.886 03:49:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:53.886 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.144 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:54.144 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:54.401 true 00:38:54.401 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:54.401 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:54.660 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.918 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:54.918 03:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:54.918 true 00:38:55.176 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:55.176 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:55.176 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:55.434 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:55.434 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:55.693 true 00:38:55.693 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:55.693 03:49:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:55.952 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:56.211 03:49:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:56.211 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:56.472 true 00:38:56.472 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:56.472 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:56.472 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:56.731 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:38:56.731 03:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:38:56.990 true 00:38:56.990 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:56.990 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:57.249 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.508 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:38:57.508 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:38:57.767 true 00:38:57.767 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:57.767 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.025 03:49:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.025 03:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:38:58.025 03:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:38:58.284 true 00:38:58.284 03:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:58.284 03:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.543 03:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.802 03:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:38:58.802 03:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:38:59.062 true 00:38:59.062 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:59.062 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.321 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.321 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:38:59.321 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:38:59.580 true 00:38:59.580 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:38:59.580 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.839 03:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.099 03:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:00.099 03:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:00.358 true 00:39:00.358 03:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:00.358 03:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.633 03:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.633 03:50:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:00.633 03:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:00.893 true 00:39:00.893 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:00.893 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.152 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.411 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:01.411 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:01.671 true 00:39:01.671 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:01.671 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.931 03:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.931 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:01.931 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:02.190 true 00:39:02.190 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:02.190 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.450 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.710 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:02.710 03:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:02.969 true 00:39:02.969 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:02.969 03:50:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.228 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.487 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:03.487 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:03.487 true 00:39:03.487 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:03.487 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.746 03:50:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.005 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:04.005 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:04.264 true 00:39:04.264 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:04.264 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:04.522 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.781 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:04.781 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:04.781 true 00:39:04.781 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:04.781 03:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.041 03:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.315 03:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:05.315 03:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:05.574 true 00:39:05.574 03:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:05.574 03:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.834 03:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.093 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:06.093 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:06.093 true 00:39:06.093 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:06.093 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.352 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.611 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:06.611 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:06.870 true 00:39:06.870 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:06.870 03:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.129 03:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.388 03:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:07.388 03:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:07.388 true 00:39:07.388 03:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:07.388 03:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.647 03:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.906 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:07.906 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:08.165 true 00:39:08.165 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:08.165 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.424 Initializing NVMe Controllers 00:39:08.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:08.424 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:39:08.424 Controller IO queue size 128, less than required. 00:39:08.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:08.424 WARNING: Some requested NVMe devices were skipped 00:39:08.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:08.424 Initialization complete. Launching workers. 
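The cycle repeated throughout the trace above corresponds to lines 44-50 of ns_hotplug_stress.sh: while the I/O workload process is still alive, namespace 1 is hot-removed and re-added on top of the Delay0 bdev, and the NULL1 null bdev is grown by one unit per pass. A minimal sketch of that loop, reconstructed from the log output rather than taken verbatim from the script (the workload PID variable and the starting size are assumed placeholders):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000                                   # assumed starting value; this log picks up around 1014

while kill -0 "$perf_pid" 2>/dev/null; do        # line 44: keep going while the workload process exists
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # line 45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # line 46: hot-add it back on the Delay0 bdev
    null_size=$((null_size + 1))                 # line 49: bump the null bdev size for this pass
    "$rpc" bdev_null_resize NULL1 "$null_size"   # line 50: resize NULL1 while I/O is in flight
done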
00:39:08.424 ======================================================== 00:39:08.425 Latency(us) 00:39:08.425 Device Information : IOPS MiB/s Average min max 00:39:08.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 24074.83 11.76 5316.59 1596.49 10302.92 00:39:08.425 ======================================================== 00:39:08.425 Total : 24074.83 11.76 5316.59 1596.49 10302.92 00:39:08.425 00:39:08.425 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.684 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:08.684 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:08.684 true 00:39:08.684 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2928166 00:39:08.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2928166) - No such process 00:39:08.684 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2928166 00:39:08.684 03:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.943 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:09.202 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:39:09.202 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:39:09.202 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:39:09.202 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:09.202 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:09.461 null0 00:39:09.461 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:09.461 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:09.461 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:09.461 null1 00:39:09.461 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:09.461 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:39:09.461 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:09.719 null2 00:39:09.719 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:09.719 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:09.719 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:09.978 null3 00:39:09.978 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:09.978 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:09.978 03:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:09.978 null4 00:39:10.237 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:10.237 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:10.237 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:10.237 null5 00:39:10.237 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:10.237 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:10.237 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:10.496 null6 00:39:10.496 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:10.496 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:10.496 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:10.756 null7 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 
00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.756 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2933193 2933195 2933196 2933198 2933200 2933202 2933204 2933206 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.757 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:11.016 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:11.016 03:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:11.016 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:11.016 03:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.016 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:11.275 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:11.275 
03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.554 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:11.836 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:12.116 03:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:12.116 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:12.393 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.393 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.393 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:12.394 03:50:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.394 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.653 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:12.912 03:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:12.912 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.172 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:13.431 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:13.690 03:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:13.953 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:14.214 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:14.472 03:50:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.472 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.731 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:14.731 03:50:15 
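The ns_hotplug_stress.sh@16-@18 lines traced above are the heart of this test: a bounded loop that keeps attaching namespaces 1-8 (backed by the null0-null7 bdevs) to nqn.2016-06.io.spdk:cnode1 and detaching them again while host I/O continues. A minimal sketch of what those three script lines appear to implement, assuming one backgrounded worker per namespace; the rpc.py path, NQN and bdev names are taken from the trace, while the worker structure is an inference from the interleaved ordering, not the authoritative script:

#!/usr/bin/env bash
# Hedged reconstruction of the loop traced at ns_hotplug_stress.sh@16-@18.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do                               # line 16: the (( ++i )) / (( i < 10 )) pairs in the trace
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # line 17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # line 18
    done
}

# Namespace n is backed by bdev null(n-1), exactly as in the traced commands;
# running the workers in parallel would explain the out-of-order add/remove batches.
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &
done
wait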
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:14.732 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:14.732 rmmod nvme_tcp 00:39:14.991 rmmod nvme_fabrics 00:39:14.991 rmmod nvme_keyring 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 
-- # set -e 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2927692 ']' 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2927692 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2927692 ']' 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2927692 00:39:14.991 03:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927692 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927692' 00:39:14.991 killing process with pid 2927692 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2927692 00:39:14.991 03:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2927692 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.385 03:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.290 
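Stripped of the xtrace noise, the nvmftestfini/nvmfcleanup sequence traced above reduces to a handful of teardown steps. A condensed sketch, using the pid, module names and interface printed in the log; the real helpers add retry loops and error handling that are omitted here, and the internals of killprocess, iptr and remove_spdk_ns are simplified assumptions:

# Condensed sketch of the teardown traced above.
sync
modprobe -v -r nvme-tcp        # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring being unloaded
modprobe -v -r nvme-fabrics
kill 2927692 2> /dev/null || true                        # stop the nvmf target reactor reported in the trace
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop any SPDK_NVMF rules the test added
ip -4 addr flush cvl_0_1                                 # release the address on the test interface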
03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.290 00:39:18.290 real 0m48.549s 00:39:18.290 user 3m3.278s 00:39:18.290 sys 0m21.386s 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:18.290 ************************************ 00:39:18.290 END TEST nvmf_ns_hotplug_stress 00:39:18.290 ************************************ 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:18.290 ************************************ 00:39:18.290 START TEST nvmf_delete_subsystem 00:39:18.290 ************************************ 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:18.290 * Looking for test storage... 00:39:18.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:18.290 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.549 03:50:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.549 --rc genhtml_branch_coverage=1 00:39:18.549 --rc genhtml_function_coverage=1 00:39:18.549 --rc genhtml_legend=1 00:39:18.549 --rc geninfo_all_blocks=1 00:39:18.549 --rc geninfo_unexecuted_blocks=1 00:39:18.549 00:39:18.549 ' 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.549 --rc genhtml_branch_coverage=1 00:39:18.549 --rc genhtml_function_coverage=1 00:39:18.549 --rc genhtml_legend=1 00:39:18.549 --rc geninfo_all_blocks=1 00:39:18.549 --rc geninfo_unexecuted_blocks=1 00:39:18.549 00:39:18.549 ' 00:39:18.549 
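Before the delete_subsystem test proper starts, the scripts/common.sh trace above walks through a field-by-field version comparison: lt 1.15 2 splits each version string on '.', '-' and ':', compares the fields numerically from the left, and returns 0 because 1 < 2, which appears to select the older --rc lcov_branch_coverage flag spelling exported next. A simplified standalone sketch of that comparison follows; the decimal() normalization of non-numeric fields seen in the trace is elided:

# Simplified sketch of the lt/cmp_versions logic traced above from scripts/common.sh.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local IFS=.-:                       # split version fields on '.', '-' and ':', as in the trace
    local ver1=($1) op=$2 ver2=($3) v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == ">" || $op == ">=" ]]; return; }
        ((a < b)) && { [[ $op == "<" || $op == "<=" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]     # all compared fields are equal
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # succeeds, matching the 'return 0' in the trace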
03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.549 --rc genhtml_branch_coverage=1 00:39:18.549 --rc genhtml_function_coverage=1 00:39:18.549 --rc genhtml_legend=1 00:39:18.549 --rc geninfo_all_blocks=1 00:39:18.549 --rc geninfo_unexecuted_blocks=1 00:39:18.549 00:39:18.549 ' 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.549 --rc genhtml_branch_coverage=1 00:39:18.549 --rc genhtml_function_coverage=1 00:39:18.549 --rc genhtml_legend=1 00:39:18.549 --rc geninfo_all_blocks=1 00:39:18.549 --rc geninfo_unexecuted_blocks=1 00:39:18.549 00:39:18.549 ' 00:39:18.549 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.550 03:50:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:18.550 03:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:23.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:23.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:23.825 03:50:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.825 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:23.826 Found net devices under 0000:af:00.0: cvl_0_0 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:23.826 Found net devices under 0000:af:00.1: cvl_0_1 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 
-- # nvmf_tcp_init 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:23.826 03:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:23.826 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:39:23.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:39:23.826 00:39:23.826 --- 10.0.0.2 ping statistics --- 00:39:23.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.826 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:23.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:23.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:39:23.826 00:39:23.826 --- 10.0.0.1 ping statistics --- 00:39:23.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.826 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:23.826 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2937677 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2937677 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2937677 ']' 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.084 03:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.084 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.084 [2024-12-13 03:50:25.148479] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:24.084 [2024-12-13 03:50:25.150567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:24.084 [2024-12-13 03:50:25.150635] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.084 [2024-12-13 03:50:25.269849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:24.342 [2024-12-13 03:50:25.378240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.342 [2024-12-13 03:50:25.378280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.342 [2024-12-13 03:50:25.378291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.342 [2024-12-13 03:50:25.378299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.342 [2024-12-13 03:50:25.378312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:24.342 [2024-12-13 03:50:25.380330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.342 [2024-12-13 03:50:25.380341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.600 [2024-12-13 03:50:25.700781] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:24.600 [2024-12-13 03:50:25.701248] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:24.601 [2024-12-13 03:50:25.701460] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
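The target-side bring-up traced above reduces to a short sequence: move one E810 port into a private network namespace, address both ends, open TCP port 4420, and start nvmf_tgt inside the namespace in interrupt mode. A condensed sketch, using the interface names, addresses, and flags recorded in this run (the full jenkins workspace path is abbreviated to build/bin):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # two cores (-m 0x3), all tracepoint groups (-e 0xFFFF), interrupt mode instead of polling
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

The --interrupt-mode flag is what the thread.c notices above refer to: the reactors on cores 0 and 1 and the nvmf poll groups run event-driven rather than busy-polling.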
00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.859 03:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.859 [2024-12-13 03:50:25.993356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.859 [2024-12-13 03:50:26.021705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.859 NULL1 00:39:24.859 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.859 03:50:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:24.860 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.860 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:24.860 Delay0 00:39:24.860 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:24.860 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.860 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:24.860 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:25.118 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.118 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2937744 00:39:25.118 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:25.118 03:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:25.118 [2024-12-13 03:50:26.159808] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
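The provisioning calls above (rpc_cmd in the trace is the autotest wrapper that forwards to scripts/rpc.py on /var/tmp/spdk.sock) build the subsystem the test will later delete while I/O is still in flight. The same sequence, condensed, with every flag taken from the traced commands:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                   # null backing bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # inject large read/write latency so requests stay queued
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # initiator: 70/30 random read/write, 512 B I/O, queue depth 128, cores 2-3, 5 s run
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                                          # let the queues fill before deleting the subsystem

The delay bdev is the point of the test: with each I/O held for roughly a second (the latency summary later shows an average near 944 ms) and a queue depth of 128, nvmf_delete_subsystem is guaranteed to hit a subsystem that still has commands outstanding.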
00:39:27.021 03:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:27.021 03:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.021 03:50:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 starting I/O failed: -6 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Write completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.280 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read 
completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 [2024-12-13 03:50:28.368778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with 
error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 [2024-12-13 03:50:28.370188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with 
error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 Read completed with error (sct=0, sc=8) 00:39:27.281 starting I/O failed: -6 00:39:27.281 Write completed with error (sct=0, sc=8) 00:39:27.281 [2024-12-13 03:50:28.371109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:39:28.249 [2024-12-13 03:50:29.338302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:39:28.249 Write completed with error (sct=0, sc=8) 00:39:28.249 Read completed with error (sct=0, sc=8) 00:39:28.249 Read completed with error (sct=0, sc=8) 00:39:28.249 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with 
error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 [2024-12-13 03:50:29.369726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 [2024-12-13 03:50:29.370482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, 
sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 [2024-12-13 03:50:29.371250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:39:28.250 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.250 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:28.250 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2937744 00:39:28.250 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Write completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 Read completed with error (sct=0, sc=8) 00:39:28.250 [2024-12-13 03:50:29.380558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:39:28.250 Initializing NVMe Controllers 00:39:28.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:28.250 Controller IO queue size 128, less than required. 00:39:28.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:28.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:28.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:28.250 Initialization complete. Launching workers. 
00:39:28.250 ======================================================== 00:39:28.250 Latency(us) 00:39:28.250 Device Information : IOPS MiB/s Average min max 00:39:28.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.02 0.10 944435.90 4127.42 1016065.09 00:39:28.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.41 0.08 868773.98 454.45 1013033.44 00:39:28.250 ======================================================== 00:39:28.250 Total : 352.43 0.17 910643.08 454.45 1016065.09 00:39:28.250 00:39:28.250 [2024-12-13 03:50:29.382398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:39:28.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2937744 00:39:28.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2937744) - No such process 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2937744 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2937744 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2937744 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.819 
03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:28.819 [2024-12-13 03:50:29.905741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2938411 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:28.819 03:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:28.819 [2024-12-13 03:50:30.015301] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
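[editor note] At this point delete_subsystem.sh has re-created nqn.2016-06.io.spdk:cnode1, re-added the 10.0.0.2:4420 listener and the Delay0 namespace, and launched a second spdk_nvme_perf (perf_pid=2938411); the timestamped entries that follow are its kill -0 / sleep 0.5 polling loop, which with the (( delay++ > 20 )) guard bounds the wait at roughly ten seconds while the test tears the subsystem down underneath the running I/O. A minimal sketch of that pattern, not the script itself; SPDK_DIR and the rpc.py call are assumptions added here for a self-contained example:

# Sketch: run spdk_nvme_perf against the target, delete the subsystem under it,
# then poll the perf process until it notices and exits (mirrors the loop traced above).
SPDK_DIR=/path/to/spdk        # assumption: path to the SPDK checkout under test
NQN=nqn.2016-06.io.spdk:cnode1

"$SPDK_DIR"/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

"$SPDK_DIR"/scripts/rpc.py nvmf_delete_subsystem "$NQN"   # pull the subsystem out from under the workload

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # loop while perf is still alive
    (( delay++ > 20 )) && break             # ~10 s cap: 20 iterations x 0.5 s
    sleep 0.5
done
wait "$perf_pid" || true                    # perf is expected to report I/O errors here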
00:39:29.386 03:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:29.386 03:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:29.386 03:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:29.953 03:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:29.953 03:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:29.953 03:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:30.521 03:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:30.521 03:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:30.521 03:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:30.779 03:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:30.779 03:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:30.779 03:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:31.346 03:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:31.346 03:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:31.346 03:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:31.913 03:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:31.913 03:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:31.913 03:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:32.173 Initializing NVMe Controllers 00:39:32.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:32.173 Controller IO queue size 128, less than required. 00:39:32.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:32.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:32.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:32.173 Initialization complete. Launching workers. 
00:39:32.173 ======================================================== 00:39:32.173 Latency(us) 00:39:32.173 Device Information : IOPS MiB/s Average min max 00:39:32.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003540.17 1000222.35 1042917.25 00:39:32.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005743.77 1000505.81 1014272.41 00:39:32.173 ======================================================== 00:39:32.173 Total : 256.00 0.12 1004641.97 1000222.35 1042917.25 00:39:32.173 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2938411 00:39:32.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2938411) - No such process 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2938411 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:32.432 rmmod nvme_tcp 00:39:32.432 rmmod nvme_fabrics 00:39:32.432 rmmod nvme_keyring 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2937677 ']' 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2937677 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2937677 ']' 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2937677 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937677 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:32.432 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937677' 00:39:32.432 killing process with pid 2937677 00:39:32.433 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2937677 00:39:32.433 03:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2937677 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:33.812 03:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:35.716 00:39:35.716 real 0m17.387s 00:39:35.716 user 0m27.657s 00:39:35.716 sys 0m5.998s 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:35.716 ************************************ 00:39:35.716 END TEST nvmf_delete_subsystem 00:39:35.716 ************************************ 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:35.716 ************************************ 00:39:35.716 START TEST nvmf_host_management 00:39:35.716 ************************************ 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:35.716 * Looking for test storage... 00:39:35.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:35.716 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:35.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.976 --rc genhtml_branch_coverage=1 00:39:35.976 --rc genhtml_function_coverage=1 00:39:35.976 --rc genhtml_legend=1 00:39:35.976 --rc geninfo_all_blocks=1 00:39:35.976 --rc geninfo_unexecuted_blocks=1 00:39:35.976 00:39:35.976 ' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:35.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.976 --rc genhtml_branch_coverage=1 00:39:35.976 --rc genhtml_function_coverage=1 00:39:35.976 --rc genhtml_legend=1 00:39:35.976 --rc geninfo_all_blocks=1 00:39:35.976 --rc geninfo_unexecuted_blocks=1 00:39:35.976 00:39:35.976 ' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:35.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.976 --rc genhtml_branch_coverage=1 00:39:35.976 --rc genhtml_function_coverage=1 00:39:35.976 --rc genhtml_legend=1 00:39:35.976 --rc geninfo_all_blocks=1 00:39:35.976 --rc geninfo_unexecuted_blocks=1 00:39:35.976 00:39:35.976 ' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:35.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:35.976 --rc genhtml_branch_coverage=1 00:39:35.976 --rc genhtml_function_coverage=1 00:39:35.976 --rc genhtml_legend=1 
00:39:35.976 --rc geninfo_all_blocks=1 00:39:35.976 --rc geninfo_unexecuted_blocks=1 00:39:35.976 00:39:35.976 ' 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.976 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:35.977 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.977 03:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.977 03:50:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:35.977 03:50:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:41.252 03:50:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:41.252 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:41.252 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
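[editor note] gather_supported_nvmf_pci_devs, traced above, whitelists the Intel e810/x722 and Mellanox device IDs from a pci_bus_cache listing, and for each matching PCI function it resolves the kernel net device by globbing that function's sysfs net/ directory; that is how 0000:af:00.0 and 0000:af:00.1 get mapped to cvl_0_0 and cvl_0_1 in the entries that follow. A stand-alone sketch of the same lookup, reduced to the single device ID (0x159b) seen in this run and using lspci for brevity where the helper walks its cached /sys/bus/pci listing:

# Sketch: find the netdev name behind each candidate NIC port, as the
# pci_net_devs glob in nvmf/common.sh does.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do   # 0x159b = the E810 ports found above
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: $(basename "$dev")"
    done
done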
00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:41.252 Found net devices under 0000:af:00.0: cvl_0_0 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:41.252 Found net devices under 0000:af:00.1: cvl_0_1 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:41.252 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:41.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:41.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:39:41.253 00:39:41.253 --- 10.0.0.2 ping statistics --- 00:39:41.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.253 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:41.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:41.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:39:41.253 00:39:41.253 --- 10.0.0.1 ping statistics --- 00:39:41.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.253 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2942525 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2942525 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2942525 ']' 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:41.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.253 03:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:41.253 [2024-12-13 03:50:42.421626] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:41.253 [2024-12-13 03:50:42.423698] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:41.253 [2024-12-13 03:50:42.423763] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.512 [2024-12-13 03:50:42.541324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:41.512 [2024-12-13 03:50:42.649585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:41.512 [2024-12-13 03:50:42.649629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:41.512 [2024-12-13 03:50:42.649641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:41.512 [2024-12-13 03:50:42.649650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:41.512 [2024-12-13 03:50:42.649658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:41.512 [2024-12-13 03:50:42.651861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:41.512 [2024-12-13 03:50:42.651940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:41.512 [2024-12-13 03:50:42.652059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:41.512 [2024-12-13 03:50:42.652082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:41.772 [2024-12-13 03:50:42.977106] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:41.772 [2024-12-13 03:50:42.978719] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:42.031 [2024-12-13 03:50:42.980592] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:42.031 [2024-12-13 03:50:42.981402] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:42.031 [2024-12-13 03:50:42.981722] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
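[editor note] The nvmf_tgt whose interrupt-mode startup notices appear above was launched inside the cvl_0_0_ns_spdk namespace that nvmftestinit built a few entries earlier: the target-side e810 port cvl_0_0 is moved into the namespace and addressed as 10.0.0.2, while its sibling port cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and both directions are ping-checked before the app starts. A condensed sketch of that bring-up, taken from the commands traced above (helper wrappers and the iptables comment tag omitted; the nvmf_tgt path is relative to the SPDK checkout):

# Sketch of the namespace plumbing behind the target started above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                    # sanity check root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                # and back

# Launch the target inside the namespace, as this run does:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &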
00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:42.291 [2024-12-13 03:50:43.284995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:42.291 Malloc0 00:39:42.291 [2024-12-13 03:50:43.409084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2942622 00:39:42.291 03:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2942622 /var/tmp/bdevperf.sock 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2942622 ']' 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:42.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:42.291 { 00:39:42.291 "params": { 00:39:42.291 "name": "Nvme$subsystem", 00:39:42.291 "trtype": "$TEST_TRANSPORT", 00:39:42.291 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:42.291 "adrfam": "ipv4", 00:39:42.291 "trsvcid": "$NVMF_PORT", 00:39:42.291 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:42.291 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:42.291 "hdgst": ${hdgst:-false}, 00:39:42.291 "ddgst": ${ddgst:-false} 00:39:42.291 }, 00:39:42.291 "method": "bdev_nvme_attach_controller" 00:39:42.291 } 00:39:42.291 EOF 00:39:42.291 )") 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
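[editor note] host_management.sh never writes a bdevperf config to disk: gen_nvmf_target_json expands the heredoc template above into a bdev_nvme_attach_controller entry, and the result reaches bdevperf as --json /dev/fd/63, i.e. through process substitution; the resolved JSON is printed in the next entries. A minimal stand-alone sketch of the same technique; the outer "subsystems"/"bdev" wrapper is assumed here (the helper's exact wrapping is not shown in this trace), and the bdevperf path is relative to the SPDK checkout:

# Sketch: generate the attach-controller JSON on the fly and feed it to bdevperf
# via process substitution instead of a temporary file.
gen_json() {
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_json) -q 64 -o 65536 -w verify -t 10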
00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:42.291 03:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:42.291 "params": { 00:39:42.291 "name": "Nvme0", 00:39:42.291 "trtype": "tcp", 00:39:42.291 "traddr": "10.0.0.2", 00:39:42.291 "adrfam": "ipv4", 00:39:42.291 "trsvcid": "4420", 00:39:42.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:42.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:42.291 "hdgst": false, 00:39:42.291 "ddgst": false 00:39:42.291 }, 00:39:42.291 "method": "bdev_nvme_attach_controller" 00:39:42.291 }' 00:39:42.550 [2024-12-13 03:50:43.533152] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:42.550 [2024-12-13 03:50:43.533241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942622 ] 00:39:42.550 [2024-12-13 03:50:43.651107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:42.810 [2024-12-13 03:50:43.766805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.069 Running I/O for 10 seconds... 00:39:43.330 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:43.330 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:43.330 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:43.330 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.330 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:43.330 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.331 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:43.331 [2024-12-13 03:50:44.439431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.439992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.440001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.440009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:43.331 [2024-12-13 03:50:44.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.331 [2024-12-13 03:50:44.440155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 
03:50:44.440302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.440989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.440999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.441011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.441020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.332 [2024-12-13 03:50:44.441031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.332 [2024-12-13 03:50:44.441041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.333 [2024-12-13 03:50:44.441494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.333 [2024-12-13 03:50:44.441525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:43.333 [2024-12-13 03:50:44.441535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:39:43.333 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:43.333 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:43.333 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
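The exchange above is the heart of the host_management check: bdevperf drives verify I/O, the host NQN is removed from the subsystem once enough reads have completed (read_io_count=323 against a threshold of 100), the target tears down that host's queues (the repeated "ABORTED - SQ DELETION" completions), and the host is then added back so the controller reset that follows can reconnect. As a minimal sketch of the same remove/add sequence, assuming the target app is reachable on the default rpc.py socket (the test itself goes through rpc_cmd), with the subsystem and host NQNs taken from the log:

    # Remove the host while I/O is in flight; expect aborted completions on the initiator side.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access so the initiator's controller reset can reconnect.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0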
00:39:43.333 [2024-12-13 03:50:44.442859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:43.333 task offset: 49152 on job bdev=Nvme0n1 fails 00:39:43.333 00:39:43.333 Latency(us) 00:39:43.333 [2024-12-13T02:50:44.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:43.333 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:43.333 Job: Nvme0n1 ended in about 0.25 seconds with error 00:39:43.333 Verification LBA range: start 0x0 length 0x400 00:39:43.333 Nvme0n1 : 0.25 1530.87 95.68 255.15 0.00 34409.64 5086.84 31207.62 00:39:43.333 [2024-12-13T02:50:44.542Z] =================================================================================================================== 00:39:43.333 [2024-12-13T02:50:44.542Z] Total : 1530.87 95.68 255.15 0.00 34409.64 5086.84 31207.62 00:39:43.333 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:43.333 03:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:43.333 [2024-12-13 03:50:44.458840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:43.333 [2024-12-13 03:50:44.458884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:39:43.593 [2024-12-13 03:50:44.550149] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2942622 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:44.532 { 00:39:44.532 "params": { 00:39:44.532 "name": "Nvme$subsystem", 00:39:44.532 "trtype": "$TEST_TRANSPORT", 00:39:44.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:44.532 "adrfam": "ipv4", 00:39:44.532 "trsvcid": "$NVMF_PORT", 00:39:44.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:44.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:44.532 "hdgst": ${hdgst:-false}, 00:39:44.532 "ddgst": ${ddgst:-false} 00:39:44.532 }, 00:39:44.532 "method": "bdev_nvme_attach_controller" 00:39:44.532 } 00:39:44.532 EOF 00:39:44.532 )") 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 
-- # cat 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:44.532 03:50:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:44.532 "params": { 00:39:44.532 "name": "Nvme0", 00:39:44.532 "trtype": "tcp", 00:39:44.532 "traddr": "10.0.0.2", 00:39:44.532 "adrfam": "ipv4", 00:39:44.532 "trsvcid": "4420", 00:39:44.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:44.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:44.532 "hdgst": false, 00:39:44.532 "ddgst": false 00:39:44.532 }, 00:39:44.532 "method": "bdev_nvme_attach_controller" 00:39:44.532 }' 00:39:44.532 [2024-12-13 03:50:45.535435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:44.532 [2024-12-13 03:50:45.535546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2943046 ] 00:39:44.532 [2024-12-13 03:50:45.646791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.792 [2024-12-13 03:50:45.757850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.362 Running I/O for 1 seconds... 00:39:46.302 1763.00 IOPS, 110.19 MiB/s 00:39:46.302 Latency(us) 00:39:46.302 [2024-12-13T02:50:47.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:46.302 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:46.302 Verification LBA range: start 0x0 length 0x400 00:39:46.302 Nvme0n1 : 1.02 1788.15 111.76 0.00 0.00 35070.30 2855.50 30833.13 00:39:46.302 [2024-12-13T02:50:47.511Z] =================================================================================================================== 00:39:46.302 [2024-12-13T02:50:47.511Z] Total : 1788.15 111.76 0.00 0.00 35070.30 2855.50 30833.13 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:47.242 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:47.243 rmmod nvme_tcp 00:39:47.243 rmmod nvme_fabrics 00:39:47.243 rmmod nvme_keyring 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2942525 ']' 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2942525 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2942525 ']' 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2942525 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942525 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942525' 00:39:47.243 killing process with pid 2942525 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2942525 00:39:47.243 03:50:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2942525 00:39:48.624 [2024-12-13 03:50:49.578094] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.624 03:50:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.534 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:50.534 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:50.534 00:39:50.534 real 0m14.892s 00:39:50.534 user 0m26.776s 00:39:50.534 sys 0m6.394s 00:39:50.534 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:50.534 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:50.534 ************************************ 00:39:50.534 END TEST nvmf_host_management 00:39:50.534 ************************************ 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:50.794 ************************************ 00:39:50.794 START TEST nvmf_lvol 00:39:50.794 ************************************ 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:50.794 * Looking for test storage... 
00:39:50.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:39:50.794 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.795 --rc genhtml_branch_coverage=1 00:39:50.795 --rc genhtml_function_coverage=1 00:39:50.795 --rc genhtml_legend=1 00:39:50.795 --rc geninfo_all_blocks=1 00:39:50.795 --rc geninfo_unexecuted_blocks=1 00:39:50.795 00:39:50.795 ' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.795 --rc genhtml_branch_coverage=1 00:39:50.795 --rc genhtml_function_coverage=1 00:39:50.795 --rc genhtml_legend=1 00:39:50.795 --rc geninfo_all_blocks=1 00:39:50.795 --rc geninfo_unexecuted_blocks=1 00:39:50.795 00:39:50.795 ' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.795 --rc genhtml_branch_coverage=1 00:39:50.795 --rc genhtml_function_coverage=1 00:39:50.795 --rc genhtml_legend=1 00:39:50.795 --rc geninfo_all_blocks=1 00:39:50.795 --rc geninfo_unexecuted_blocks=1 00:39:50.795 00:39:50.795 ' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:50.795 --rc genhtml_branch_coverage=1 00:39:50.795 --rc genhtml_function_coverage=1 00:39:50.795 --rc genhtml_legend=1 00:39:50.795 --rc geninfo_all_blocks=1 00:39:50.795 --rc geninfo_unexecuted_blocks=1 00:39:50.795 00:39:50.795 ' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:50.795 03:50:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:50.795 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:39:50.796 03:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.072 03:50:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:56.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:56.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.072 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:56.072 Found net devices under 0000:af:00.0: cvl_0_0 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:56.073 Found net devices under 0000:af:00.1: cvl_0_1 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.073 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.073 
03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:39:56.333 00:39:56.333 --- 10.0.0.2 ping statistics --- 00:39:56.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.333 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:39:56.333 00:39:56.333 --- 10.0.0.1 ping statistics --- 00:39:56.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.333 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2946966 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2946966 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2946966 ']' 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.333 03:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:56.594 [2024-12-13 03:50:57.598645] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
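For orientation, the nvmftestinit phase traced above reduces to moving the target-side E810 port into a private network namespace and addressing both ends before nvmf_tgt is started. A minimal sketch of the equivalent commands, using the interface names and addresses reported in this run (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2), would be roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # what the ipts helper issues above
  ping -c 1 10.0.0.2                                            # reachability check before the target starts

The target is then launched inside that namespace via ip netns exec, as the nvmf_tgt command line above shows.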
00:39:56.594 [2024-12-13 03:50:57.600716] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:56.594 [2024-12-13 03:50:57.600799] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.594 [2024-12-13 03:50:57.717094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:56.854 [2024-12-13 03:50:57.829888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.854 [2024-12-13 03:50:57.829936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.854 [2024-12-13 03:50:57.829964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.854 [2024-12-13 03:50:57.829974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.854 [2024-12-13 03:50:57.829984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:56.854 [2024-12-13 03:50:57.832034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.854 [2024-12-13 03:50:57.832042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.854 [2024-12-13 03:50:57.832052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:57.113 [2024-12-13 03:50:58.143116] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:57.113 [2024-12-13 03:50:58.144110] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:57.113 [2024-12-13 03:50:58.144946] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:57.113 [2024-12-13 03:50:58.145156] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
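The notices above record the reactors starting on cores 0-2 and each spdk_thread (app_thread plus the nvmf poll groups) being put into interrupt mode, which is what the --interrupt-mode flag requests: idle reactors sleep on event file descriptors rather than busy-polling. One way to confirm this on a live target is the framework_get_reactors RPC; a sketch, with the path shortened relative to the full scripts/rpc.py invocation used by the test:

  scripts/rpc.py framework_get_reactors

Its output lists each reactor's lcore and the lightweight threads scheduled on it, and on interrupt-capable builds it also indicates whether the reactor is currently running in interrupt mode.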
00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.372 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:57.694 [2024-12-13 03:50:58.621043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:57.694 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:57.988 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:57.988 03:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:58.273 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:58.273 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:58.273 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:58.532 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b1165c5d-ceba-482b-a651-a78365582b14 00:39:58.532 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b1165c5d-ceba-482b-a651-a78365582b14 lvol 20 00:39:58.791 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=50399800-b515-4ac7-adf2-61c8240af23e 00:39:58.791 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:58.791 03:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50399800-b515-4ac7-adf2-61c8240af23e 00:39:59.050 03:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:59.309 [2024-12-13 03:51:00.336953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:39:59.309 03:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:59.569 03:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2947493 00:39:59.569 03:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:59.569 03:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:00.506 03:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 50399800-b515-4ac7-adf2-61c8240af23e MY_SNAPSHOT 00:40:00.766 03:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=48ab7010-dcb6-45f8-a7f4-b6aa86ab37c9 00:40:00.766 03:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 50399800-b515-4ac7-adf2-61c8240af23e 30 00:40:01.025 03:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 48ab7010-dcb6-45f8-a7f4-b6aa86ab37c9 MY_CLONE 00:40:01.284 03:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=71d021e1-ab52-49d2-bc8d-c800d1089cb3 00:40:01.284 03:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 71d021e1-ab52-49d2-bc8d-c800d1089cb3 00:40:01.853 03:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2947493 00:40:09.976 Initializing NVMe Controllers 00:40:09.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:09.976 Controller IO queue size 128, less than required. 00:40:09.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:09.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:09.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:09.976 Initialization complete. Launching workers. 
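Condensed, the RPC sequence that nvmf_lvol.sh drives in the entries above amounts to roughly the steps below; rpc.py stands for the full scripts/rpc.py path used in the log, and the <...> placeholders are the UUIDs printed at each step (the lvstore b1165c5d-..., the lvol 50399800-..., and so on):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                    # run twice: Malloc0 and Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
  rpc.py bdev_lvol_create_lvstore raid0 lvs                           # returns <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                       # 20 MiB lvol, returns <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf runs its 10-second 4K randwrite workload against the subsystem:
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  rpc.py bdev_lvol_resize <lvol-uuid> 30                              # grow the live lvol to 30 MiB
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate <clone-uuid>                               # decouple the clone from its snapshot

In this run the snapshot, resize, clone and inflate steps execute while the perf workload is still in flight; the script only waits on perf_pid 2947493 afterwards, as the subsequent entries show.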
00:40:09.976 ======================================================== 00:40:09.976 Latency(us) 00:40:09.976 Device Information : IOPS MiB/s Average min max 00:40:09.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11383.70 44.47 11244.25 934.92 136357.30 00:40:09.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11067.50 43.23 11563.00 3373.71 147534.00 00:40:09.976 ======================================================== 00:40:09.976 Total : 22451.20 87.70 11401.38 934.92 147534.00 00:40:09.976 00:40:09.976 03:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:09.976 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50399800-b515-4ac7-adf2-61c8240af23e 00:40:10.235 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1165c5d-ceba-482b-a651-a78365582b14 00:40:10.493 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:10.494 rmmod nvme_tcp 00:40:10.494 rmmod nvme_fabrics 00:40:10.494 rmmod nvme_keyring 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2946966 ']' 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2946966 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2946966 ']' 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2946966 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2946966 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2946966' 00:40:10.494 killing process with pid 2946966 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2946966 00:40:10.494 03:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2946966 00:40:12.408 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:12.408 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:12.408 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:12.408 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.409 03:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:14.316 00:40:14.316 real 0m23.440s 00:40:14.316 user 0m57.389s 00:40:14.316 sys 0m9.419s 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:14.316 ************************************ 00:40:14.316 END TEST nvmf_lvol 00:40:14.316 ************************************ 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:14.316 ************************************ 00:40:14.316 START TEST nvmf_lvs_grow 00:40:14.316 
************************************ 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:14.316 * Looking for test storage... 00:40:14.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:14.316 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:14.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.317 --rc genhtml_branch_coverage=1 00:40:14.317 --rc genhtml_function_coverage=1 00:40:14.317 --rc genhtml_legend=1 00:40:14.317 --rc geninfo_all_blocks=1 00:40:14.317 --rc geninfo_unexecuted_blocks=1 00:40:14.317 00:40:14.317 ' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:14.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.317 --rc genhtml_branch_coverage=1 00:40:14.317 --rc genhtml_function_coverage=1 00:40:14.317 --rc genhtml_legend=1 00:40:14.317 --rc geninfo_all_blocks=1 00:40:14.317 --rc geninfo_unexecuted_blocks=1 00:40:14.317 00:40:14.317 ' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:14.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.317 --rc genhtml_branch_coverage=1 00:40:14.317 --rc genhtml_function_coverage=1 00:40:14.317 --rc genhtml_legend=1 00:40:14.317 --rc geninfo_all_blocks=1 00:40:14.317 --rc geninfo_unexecuted_blocks=1 00:40:14.317 00:40:14.317 ' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:14.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:14.317 --rc genhtml_branch_coverage=1 00:40:14.317 --rc genhtml_function_coverage=1 00:40:14.317 --rc genhtml_legend=1 00:40:14.317 --rc geninfo_all_blocks=1 00:40:14.317 --rc geninfo_unexecuted_blocks=1 00:40:14.317 00:40:14.317 ' 00:40:14.317 03:51:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:14.317 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:14.318 03:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:19.598 03:51:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
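As in the nvmf_lvol run above, gather_supported_nvmf_pci_devs classifies ports purely by PCI vendor/device ID (0x8086:0x1592 and 0x8086:0x159b for Intel E810, 0x8086:0x37d2 for x722, the 0x15b3 entries for Mellanox). A quick way to reproduce the match reported for this host, assuming lspci is available on the node, is:

  lspci -d 8086:159b    # should list the two 0000:af:00.x E810 ports reported in this log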
00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:19.598 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.598 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:19.599 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:19.599 Found net devices under 0000:af:00.0: cvl_0_0 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:19.599 Found net devices under 0000:af:00.1: cvl_0_1 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:19.599 03:51:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:19.599 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.859 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.859 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.859 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:19.859 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:19.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:40:19.859 00:40:19.859 --- 10.0.0.2 ping statistics --- 00:40:19.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.860 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:19.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:40:19.860 00:40:19.860 --- 10.0.0.1 ping statistics --- 00:40:19.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.860 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2953339 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2953339 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2953339 ']' 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:19.860 03:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:19.860 [2024-12-13 03:51:20.999707] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
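Condensing the namespace plumbing traced above: the first E810 port (cvl_0_0) moves into a dedicated netns and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1; an iptables rule admits NVMe/TCP traffic on port 4420 and both directions are ping-checked before the target comes up. A minimal sketch of the same sequence, with names and addresses taken from this run and repository paths shortened (nvmf/common.sh wraps each step in helper functions, and its ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt that nvmfappstart launches next is prefixed with the same "ip netns exec cvl_0_0_ns_spdk", pinned to a single core and started in interrupt mode (-e 0xFFFF --interrupt-mode -m 0x1), which is what the thread.c and reactor.c notices around this point report.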
00:40:19.860 [2024-12-13 03:51:21.001818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:19.860 [2024-12-13 03:51:21.001884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:20.119 [2024-12-13 03:51:21.118979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.119 [2024-12-13 03:51:21.226806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:20.119 [2024-12-13 03:51:21.226848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:20.119 [2024-12-13 03:51:21.226860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:20.119 [2024-12-13 03:51:21.226868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:20.119 [2024-12-13 03:51:21.226878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:20.119 [2024-12-13 03:51:21.228265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.379 [2024-12-13 03:51:21.532519] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:20.379 [2024-12-13 03:51:21.532783] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:20.638 03:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:20.898 [2024-12-13 03:51:22.005038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:20.898 ************************************ 00:40:20.898 START TEST lvs_grow_clean 00:40:20.898 ************************************ 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:20.898 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:21.157 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:21.157 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:21.416 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=be8a47c9-df27-402f-b756-c421872d5c10 00:40:21.416 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:21.416 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:21.675 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:21.675 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:21.675 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u be8a47c9-df27-402f-b756-c421872d5c10 lvol 150 00:40:21.675 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=659a0f85-012a-4725-8ae8-9af38612747b 00:40:21.675 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:21.675 03:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:21.935 [2024-12-13 03:51:23.024983] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:21.935 [2024-12-13 03:51:23.025146] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:21.935 true 00:40:21.935 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:21.935 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:22.194 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:22.194 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:22.453 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 659a0f85-012a-4725-8ae8-9af38612747b 00:40:22.453 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:22.712 [2024-12-13 03:51:23.753350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:22.713 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:22.972 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2953911 00:40:22.972 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:22.972 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:22.972 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2953911 /var/tmp/bdevperf.sock 00:40:22.972 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2953911 ']' 00:40:22.973 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:22.973 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.973 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:22.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:22.973 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.973 03:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:22.973 [2024-12-13 03:51:24.025083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:22.973 [2024-12-13 03:51:24.025185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953911 ] 00:40:22.973 [2024-12-13 03:51:24.136751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.233 [2024-12-13 03:51:24.242815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.801 03:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.801 03:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:23.801 03:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:24.059 Nvme0n1 00:40:24.059 03:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:24.316 [ 00:40:24.316 { 00:40:24.316 "name": "Nvme0n1", 00:40:24.316 "aliases": [ 00:40:24.316 "659a0f85-012a-4725-8ae8-9af38612747b" 00:40:24.316 ], 00:40:24.316 "product_name": "NVMe disk", 00:40:24.316 "block_size": 4096, 00:40:24.316 "num_blocks": 38912, 00:40:24.316 "uuid": "659a0f85-012a-4725-8ae8-9af38612747b", 00:40:24.316 "numa_id": 1, 00:40:24.316 "assigned_rate_limits": { 00:40:24.316 "rw_ios_per_sec": 0, 00:40:24.316 "rw_mbytes_per_sec": 0, 00:40:24.316 "r_mbytes_per_sec": 0, 00:40:24.316 "w_mbytes_per_sec": 0 00:40:24.316 }, 00:40:24.316 "claimed": false, 00:40:24.316 "zoned": false, 00:40:24.316 "supported_io_types": { 00:40:24.316 "read": true, 00:40:24.316 "write": true, 00:40:24.316 "unmap": true, 00:40:24.316 "flush": true, 00:40:24.316 "reset": true, 00:40:24.316 "nvme_admin": true, 00:40:24.316 "nvme_io": true, 00:40:24.316 "nvme_io_md": false, 00:40:24.316 "write_zeroes": true, 00:40:24.316 "zcopy": false, 00:40:24.316 "get_zone_info": false, 00:40:24.316 "zone_management": false, 00:40:24.316 "zone_append": false, 00:40:24.316 "compare": true, 00:40:24.316 "compare_and_write": true, 00:40:24.317 "abort": true, 00:40:24.317 "seek_hole": false, 00:40:24.317 "seek_data": false, 00:40:24.317 "copy": true, 
00:40:24.317 "nvme_iov_md": false 00:40:24.317 }, 00:40:24.317 "memory_domains": [ 00:40:24.317 { 00:40:24.317 "dma_device_id": "system", 00:40:24.317 "dma_device_type": 1 00:40:24.317 } 00:40:24.317 ], 00:40:24.317 "driver_specific": { 00:40:24.317 "nvme": [ 00:40:24.317 { 00:40:24.317 "trid": { 00:40:24.317 "trtype": "TCP", 00:40:24.317 "adrfam": "IPv4", 00:40:24.317 "traddr": "10.0.0.2", 00:40:24.317 "trsvcid": "4420", 00:40:24.317 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:24.317 }, 00:40:24.317 "ctrlr_data": { 00:40:24.317 "cntlid": 1, 00:40:24.317 "vendor_id": "0x8086", 00:40:24.317 "model_number": "SPDK bdev Controller", 00:40:24.317 "serial_number": "SPDK0", 00:40:24.317 "firmware_revision": "25.01", 00:40:24.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:24.317 "oacs": { 00:40:24.317 "security": 0, 00:40:24.317 "format": 0, 00:40:24.317 "firmware": 0, 00:40:24.317 "ns_manage": 0 00:40:24.317 }, 00:40:24.317 "multi_ctrlr": true, 00:40:24.317 "ana_reporting": false 00:40:24.317 }, 00:40:24.317 "vs": { 00:40:24.317 "nvme_version": "1.3" 00:40:24.317 }, 00:40:24.317 "ns_data": { 00:40:24.317 "id": 1, 00:40:24.317 "can_share": true 00:40:24.317 } 00:40:24.317 } 00:40:24.317 ], 00:40:24.317 "mp_policy": "active_passive" 00:40:24.317 } 00:40:24.317 } 00:40:24.317 ] 00:40:24.317 03:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2954127 00:40:24.317 03:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:24.317 03:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:24.317 Running I/O for 10 seconds... 
00:40:25.696 Latency(us) 00:40:25.696 [2024-12-13T02:51:26.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:25.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:25.696 Nvme0n1 : 1.00 20130.00 78.63 0.00 0.00 0.00 0.00 0.00 00:40:25.696 [2024-12-13T02:51:26.905Z] =================================================================================================================== 00:40:25.696 [2024-12-13T02:51:26.905Z] Total : 20130.00 78.63 0.00 0.00 0.00 0.00 0.00 00:40:25.696 00:40:26.264 03:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:26.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:26.264 Nvme0n1 : 2.00 20288.50 79.25 0.00 0.00 0.00 0.00 0.00 00:40:26.264 [2024-12-13T02:51:27.473Z] =================================================================================================================== 00:40:26.264 [2024-12-13T02:51:27.473Z] Total : 20288.50 79.25 0.00 0.00 0.00 0.00 0.00 00:40:26.264 00:40:26.523 true 00:40:26.523 03:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:26.523 03:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:26.781 03:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:26.781 03:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:26.781 03:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2954127 00:40:27.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:27.348 Nvme0n1 : 3.00 20341.33 79.46 0.00 0.00 0.00 0.00 0.00 00:40:27.348 [2024-12-13T02:51:28.557Z] =================================================================================================================== 00:40:27.348 [2024-12-13T02:51:28.557Z] Total : 20341.33 79.46 0.00 0.00 0.00 0.00 0.00 00:40:27.348 00:40:28.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:28.285 Nvme0n1 : 4.00 20367.75 79.56 0.00 0.00 0.00 0.00 0.00 00:40:28.285 [2024-12-13T02:51:29.494Z] =================================================================================================================== 00:40:28.285 [2024-12-13T02:51:29.494Z] Total : 20367.75 79.56 0.00 0.00 0.00 0.00 0.00 00:40:28.285 00:40:29.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:29.664 Nvme0n1 : 5.00 20332.80 79.42 0.00 0.00 0.00 0.00 0.00 00:40:29.664 [2024-12-13T02:51:30.873Z] =================================================================================================================== 00:40:29.664 [2024-12-13T02:51:30.873Z] Total : 20332.80 79.42 0.00 0.00 0.00 0.00 0.00 00:40:29.664 00:40:30.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:30.602 Nvme0n1 : 6.00 20373.00 79.58 0.00 0.00 0.00 0.00 0.00 00:40:30.602 [2024-12-13T02:51:31.811Z] 
=================================================================================================================== 00:40:30.602 [2024-12-13T02:51:31.811Z] Total : 20373.00 79.58 0.00 0.00 0.00 0.00 0.00 00:40:30.602 00:40:31.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:31.540 Nvme0n1 : 7.00 20419.86 79.77 0.00 0.00 0.00 0.00 0.00 00:40:31.540 [2024-12-13T02:51:32.749Z] =================================================================================================================== 00:40:31.540 [2024-12-13T02:51:32.749Z] Total : 20419.86 79.77 0.00 0.00 0.00 0.00 0.00 00:40:31.540 00:40:32.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:32.478 Nvme0n1 : 8.00 20455.00 79.90 0.00 0.00 0.00 0.00 0.00 00:40:32.478 [2024-12-13T02:51:33.687Z] =================================================================================================================== 00:40:32.478 [2024-12-13T02:51:33.687Z] Total : 20455.00 79.90 0.00 0.00 0.00 0.00 0.00 00:40:32.478 00:40:33.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:33.416 Nvme0n1 : 9.00 20496.44 80.06 0.00 0.00 0.00 0.00 0.00 00:40:33.416 [2024-12-13T02:51:34.625Z] =================================================================================================================== 00:40:33.416 [2024-12-13T02:51:34.625Z] Total : 20496.44 80.06 0.00 0.00 0.00 0.00 0.00 00:40:33.416 00:40:34.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:34.353 Nvme0n1 : 10.00 20516.90 80.14 0.00 0.00 0.00 0.00 0.00 00:40:34.353 [2024-12-13T02:51:35.562Z] =================================================================================================================== 00:40:34.353 [2024-12-13T02:51:35.562Z] Total : 20516.90 80.14 0.00 0.00 0.00 0.00 0.00 00:40:34.353 00:40:34.353 00:40:34.353 Latency(us) 00:40:34.353 [2024-12-13T02:51:35.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:34.353 Nvme0n1 : 10.00 20521.46 80.16 0.00 0.00 6234.03 4213.03 18724.57 00:40:34.353 [2024-12-13T02:51:35.562Z] =================================================================================================================== 00:40:34.353 [2024-12-13T02:51:35.562Z] Total : 20521.46 80.16 0.00 0.00 6234.03 4213.03 18724.57 00:40:34.353 { 00:40:34.353 "results": [ 00:40:34.353 { 00:40:34.353 "job": "Nvme0n1", 00:40:34.353 "core_mask": "0x2", 00:40:34.353 "workload": "randwrite", 00:40:34.353 "status": "finished", 00:40:34.353 "queue_depth": 128, 00:40:34.353 "io_size": 4096, 00:40:34.353 "runtime": 10.004017, 00:40:34.353 "iops": 20521.45653091153, 00:40:34.353 "mibps": 80.16193957387317, 00:40:34.353 "io_failed": 0, 00:40:34.353 "io_timeout": 0, 00:40:34.353 "avg_latency_us": 6234.0255108406245, 00:40:34.353 "min_latency_us": 4213.028571428571, 00:40:34.353 "max_latency_us": 18724.571428571428 00:40:34.353 } 00:40:34.353 ], 00:40:34.353 "core_count": 1 00:40:34.353 } 00:40:34.353 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2953911 00:40:34.353 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2953911 ']' 00:40:34.353 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2953911 
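As a quick sanity check on the summary above: bdevperf holds the queue depth at 128, so by Little's law the expected average latency is roughly qd / IOPS = 128 / 20521 ≈ 6.24 ms, which lines up with the reported 6234 µs average.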
00:40:34.353 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:34.353 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:34.353 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2953911 00:40:34.612 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:34.612 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:34.612 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2953911' 00:40:34.612 killing process with pid 2953911 00:40:34.612 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2953911 00:40:34.612 Received shutdown signal, test time was about 10.000000 seconds 00:40:34.612 00:40:34.612 Latency(us) 00:40:34.612 [2024-12-13T02:51:35.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:34.612 [2024-12-13T02:51:35.821Z] =================================================================================================================== 00:40:34.612 [2024-12-13T02:51:35.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:34.612 03:51:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2953911 00:40:35.550 03:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:35.550 03:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:35.810 03:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:35.810 03:51:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:36.069 [2024-12-13 03:51:37.200943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 
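Condensing the clean-variant bookkeeping traced above: the backing AIO file had already been enlarged from 200 MiB to 400 MiB (and rescanned) before the run, bdev_lvol_grow_lvstore is issued while bdevperf is still writing, and the lvstore is then expected to report 99 total clusters and, once I/O stops, 61 free clusters (99 minus the lvol's 38); finally the base aio_bdev is deleted and the lvstore must disappear with it. A condensed sketch, with the UUID from this run and rpc.py paths shortened:

    lvs=be8a47c9-df27-402f-b756-c421872d5c10
    rpc.py bdev_lvol_grow_lvstore -u $lvs                                      # pick up the enlarged AIO file
    rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 99 after the grow
    rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters'         # 61 once the run ends
    rpc.py bdev_aio_delete aio_bdev                                            # drop the base bdev
    rpc.py bdev_lvol_get_lvstores -u $lvs                                      # must now fail: No such device

The error response that follows ("No such device", code -19) is exactly that expected failure, after which the aio_bdev is recreated, the grown geometry is re-verified from disk (99 total / 61 free), and everything is torn down before the dirty variant starts.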
00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:36.069 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:36.328 request: 00:40:36.328 { 00:40:36.328 "uuid": "be8a47c9-df27-402f-b756-c421872d5c10", 00:40:36.328 "method": "bdev_lvol_get_lvstores", 00:40:36.328 "req_id": 1 00:40:36.328 } 00:40:36.328 Got JSON-RPC error response 00:40:36.328 response: 00:40:36.328 { 00:40:36.328 "code": -19, 00:40:36.328 "message": "No such device" 00:40:36.328 } 00:40:36.328 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:36.329 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:36.329 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:36.329 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:36.329 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:36.588 aio_bdev 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
659a0f85-012a-4725-8ae8-9af38612747b 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=659a0f85-012a-4725-8ae8-9af38612747b 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:36.588 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:36.848 03:51:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 659a0f85-012a-4725-8ae8-9af38612747b -t 2000 00:40:36.848 [ 00:40:36.848 { 00:40:36.848 "name": "659a0f85-012a-4725-8ae8-9af38612747b", 00:40:36.848 "aliases": [ 00:40:36.848 "lvs/lvol" 00:40:36.848 ], 00:40:36.848 "product_name": "Logical Volume", 00:40:36.848 "block_size": 4096, 00:40:36.848 "num_blocks": 38912, 00:40:36.848 "uuid": "659a0f85-012a-4725-8ae8-9af38612747b", 00:40:36.848 "assigned_rate_limits": { 00:40:36.848 "rw_ios_per_sec": 0, 00:40:36.848 "rw_mbytes_per_sec": 0, 00:40:36.848 "r_mbytes_per_sec": 0, 00:40:36.848 "w_mbytes_per_sec": 0 00:40:36.848 }, 00:40:36.848 "claimed": false, 00:40:36.848 "zoned": false, 00:40:36.848 "supported_io_types": { 00:40:36.848 "read": true, 00:40:36.848 "write": true, 00:40:36.848 "unmap": true, 00:40:36.848 "flush": false, 00:40:36.848 "reset": true, 00:40:36.848 "nvme_admin": false, 00:40:36.848 "nvme_io": false, 00:40:36.848 "nvme_io_md": false, 00:40:36.848 "write_zeroes": true, 00:40:36.848 "zcopy": false, 00:40:36.848 "get_zone_info": false, 00:40:36.848 "zone_management": false, 00:40:36.848 "zone_append": false, 00:40:36.848 "compare": false, 00:40:36.848 "compare_and_write": false, 00:40:36.848 "abort": false, 00:40:36.848 "seek_hole": true, 00:40:36.848 "seek_data": true, 00:40:36.848 "copy": false, 00:40:36.848 "nvme_iov_md": false 00:40:36.848 }, 00:40:36.848 "driver_specific": { 00:40:36.848 "lvol": { 00:40:36.848 "lvol_store_uuid": "be8a47c9-df27-402f-b756-c421872d5c10", 00:40:36.848 "base_bdev": "aio_bdev", 00:40:36.848 "thin_provision": false, 00:40:36.848 "num_allocated_clusters": 38, 00:40:36.848 "snapshot": false, 00:40:36.848 "clone": false, 00:40:36.848 "esnap_clone": false 00:40:36.848 } 00:40:36.848 } 00:40:36.848 } 00:40:36.848 ] 00:40:36.848 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:36.848 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:36.848 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:37.108 03:51:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:37.108 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:37.108 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:37.367 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:37.367 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 659a0f85-012a-4725-8ae8-9af38612747b 00:40:37.627 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u be8a47c9-df27-402f-b756-c421872d5c10 00:40:37.627 03:51:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:37.887 00:40:37.887 real 0m16.975s 00:40:37.887 user 0m16.585s 00:40:37.887 sys 0m1.562s 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:37.887 ************************************ 00:40:37.887 END TEST lvs_grow_clean 00:40:37.887 ************************************ 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:37.887 ************************************ 00:40:37.887 START TEST lvs_grow_dirty 00:40:37.887 ************************************ 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:37.887 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:38.148 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:38.148 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:38.148 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:38.148 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:38.407 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:38.408 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:38.408 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:38.667 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:38.667 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:38.667 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf2ea579-a5e1-4548-a68a-ce1be4766feb lvol 150 00:40:38.927 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:38.927 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:38.927 03:51:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:38.927 [2024-12-13 03:51:40.084849] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:38.927 [2024-12-13 03:51:40.084979] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:38.927 true 00:40:38.927 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:38.927 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:39.186 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:39.186 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:39.445 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:39.705 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:39.705 [2024-12-13 03:51:40.833230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:39.705 03:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2956580 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2956580 /var/tmp/bdevperf.sock 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2956580 ']' 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:39.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
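The lvs_grow_dirty pass that begins here repeats the clean-variant provisioning with fresh objects (lvstore bf2ea579-a5e1-4548-a68a-ce1be4766feb, lvol 0dd38d66-f006-41d7-988f-21b82b698d0d, the same subsystem NQN and listener, and a second bdevperf instance, pid 2956580, running an identical workload); the point of the second pass is presumably the branch guarded by the dirty check that the clean run skipped:

    run_test lvs_grow_dirty lvs_grow dirty    # same helper as above; the mode argument selects the dirty-only steps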
00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:39.965 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:39.965 [2024-12-13 03:51:41.099684] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:39.965 [2024-12-13 03:51:41.099789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956580 ] 00:40:40.225 [2024-12-13 03:51:41.213319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.225 [2024-12-13 03:51:41.322015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.794 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:40.794 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:40.794 03:51:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:41.363 Nvme0n1 00:40:41.363 03:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:41.363 [ 00:40:41.363 { 00:40:41.363 "name": "Nvme0n1", 00:40:41.363 "aliases": [ 00:40:41.363 "0dd38d66-f006-41d7-988f-21b82b698d0d" 00:40:41.363 ], 00:40:41.363 "product_name": "NVMe disk", 00:40:41.363 "block_size": 4096, 00:40:41.363 "num_blocks": 38912, 00:40:41.363 "uuid": "0dd38d66-f006-41d7-988f-21b82b698d0d", 00:40:41.363 "numa_id": 1, 00:40:41.363 "assigned_rate_limits": { 00:40:41.363 "rw_ios_per_sec": 0, 00:40:41.363 "rw_mbytes_per_sec": 0, 00:40:41.363 "r_mbytes_per_sec": 0, 00:40:41.363 "w_mbytes_per_sec": 0 00:40:41.363 }, 00:40:41.363 "claimed": false, 00:40:41.363 "zoned": false, 00:40:41.363 "supported_io_types": { 00:40:41.363 "read": true, 00:40:41.363 "write": true, 00:40:41.363 "unmap": true, 00:40:41.363 "flush": true, 00:40:41.363 "reset": true, 00:40:41.363 "nvme_admin": true, 00:40:41.363 "nvme_io": true, 00:40:41.363 "nvme_io_md": false, 00:40:41.363 "write_zeroes": true, 00:40:41.363 "zcopy": false, 00:40:41.363 "get_zone_info": false, 00:40:41.363 "zone_management": false, 00:40:41.363 "zone_append": false, 00:40:41.363 "compare": true, 00:40:41.363 "compare_and_write": true, 00:40:41.363 "abort": true, 00:40:41.363 "seek_hole": false, 00:40:41.363 "seek_data": false, 00:40:41.363 "copy": true, 00:40:41.363 "nvme_iov_md": false 00:40:41.363 }, 00:40:41.363 "memory_domains": [ 00:40:41.363 { 00:40:41.363 "dma_device_id": "system", 00:40:41.363 "dma_device_type": 1 00:40:41.363 } 00:40:41.363 ], 00:40:41.363 "driver_specific": { 00:40:41.363 "nvme": [ 00:40:41.363 { 00:40:41.363 "trid": { 00:40:41.363 "trtype": "TCP", 00:40:41.363 "adrfam": "IPv4", 00:40:41.363 "traddr": "10.0.0.2", 00:40:41.363 "trsvcid": "4420", 00:40:41.363 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:41.363 }, 00:40:41.363 "ctrlr_data": 
{ 00:40:41.363 "cntlid": 1, 00:40:41.363 "vendor_id": "0x8086", 00:40:41.363 "model_number": "SPDK bdev Controller", 00:40:41.364 "serial_number": "SPDK0", 00:40:41.364 "firmware_revision": "25.01", 00:40:41.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:41.364 "oacs": { 00:40:41.364 "security": 0, 00:40:41.364 "format": 0, 00:40:41.364 "firmware": 0, 00:40:41.364 "ns_manage": 0 00:40:41.364 }, 00:40:41.364 "multi_ctrlr": true, 00:40:41.364 "ana_reporting": false 00:40:41.364 }, 00:40:41.364 "vs": { 00:40:41.364 "nvme_version": "1.3" 00:40:41.364 }, 00:40:41.364 "ns_data": { 00:40:41.364 "id": 1, 00:40:41.364 "can_share": true 00:40:41.364 } 00:40:41.364 } 00:40:41.364 ], 00:40:41.364 "mp_policy": "active_passive" 00:40:41.364 } 00:40:41.364 } 00:40:41.364 ] 00:40:41.364 03:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2956835 00:40:41.364 03:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:41.364 03:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:41.624 Running I/O for 10 seconds... 00:40:42.564 Latency(us) 00:40:42.564 [2024-12-13T02:51:43.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:42.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:42.564 Nvme0n1 : 1.00 20193.00 78.88 0.00 0.00 0.00 0.00 0.00 00:40:42.564 [2024-12-13T02:51:43.773Z] =================================================================================================================== 00:40:42.564 [2024-12-13T02:51:43.773Z] Total : 20193.00 78.88 0.00 0.00 0.00 0.00 0.00 00:40:42.564 00:40:43.503 03:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:43.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:43.503 Nvme0n1 : 2.00 20383.50 79.62 0.00 0.00 0.00 0.00 0.00 00:40:43.503 [2024-12-13T02:51:44.712Z] =================================================================================================================== 00:40:43.503 [2024-12-13T02:51:44.712Z] Total : 20383.50 79.62 0.00 0.00 0.00 0.00 0.00 00:40:43.503 00:40:43.503 true 00:40:43.763 03:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:43.763 03:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:43.763 03:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:43.763 03:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:43.763 03:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2956835 00:40:44.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:44.701 Nvme0n1 : 
3.00 20404.67 79.71 0.00 0.00 0.00 0.00 0.00 00:40:44.701 [2024-12-13T02:51:45.910Z] =================================================================================================================== 00:40:44.701 [2024-12-13T02:51:45.910Z] Total : 20404.67 79.71 0.00 0.00 0.00 0.00 0.00 00:40:44.701 00:40:45.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:45.639 Nvme0n1 : 4.00 20478.75 80.00 0.00 0.00 0.00 0.00 0.00 00:40:45.639 [2024-12-13T02:51:46.848Z] =================================================================================================================== 00:40:45.639 [2024-12-13T02:51:46.848Z] Total : 20478.75 80.00 0.00 0.00 0.00 0.00 0.00 00:40:45.639 00:40:46.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:46.577 Nvme0n1 : 5.00 20523.20 80.17 0.00 0.00 0.00 0.00 0.00 00:40:46.577 [2024-12-13T02:51:47.786Z] =================================================================================================================== 00:40:46.577 [2024-12-13T02:51:47.786Z] Total : 20523.20 80.17 0.00 0.00 0.00 0.00 0.00 00:40:46.577 00:40:47.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:47.515 Nvme0n1 : 6.00 20552.83 80.28 0.00 0.00 0.00 0.00 0.00 00:40:47.515 [2024-12-13T02:51:48.724Z] =================================================================================================================== 00:40:47.515 [2024-12-13T02:51:48.724Z] Total : 20552.83 80.28 0.00 0.00 0.00 0.00 0.00 00:40:47.515 00:40:48.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:48.455 Nvme0n1 : 7.00 20574.00 80.37 0.00 0.00 0.00 0.00 0.00 00:40:48.455 [2024-12-13T02:51:49.664Z] =================================================================================================================== 00:40:48.455 [2024-12-13T02:51:49.664Z] Total : 20574.00 80.37 0.00 0.00 0.00 0.00 0.00 00:40:48.455 00:40:49.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:49.518 Nvme0n1 : 8.00 20542.25 80.24 0.00 0.00 0.00 0.00 0.00 00:40:49.518 [2024-12-13T02:51:50.727Z] =================================================================================================================== 00:40:49.518 [2024-12-13T02:51:50.727Z] Total : 20542.25 80.24 0.00 0.00 0.00 0.00 0.00 00:40:49.518 00:40:50.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:50.485 Nvme0n1 : 9.00 20545.78 80.26 0.00 0.00 0.00 0.00 0.00 00:40:50.485 [2024-12-13T02:51:51.694Z] =================================================================================================================== 00:40:50.485 [2024-12-13T02:51:51.694Z] Total : 20545.78 80.26 0.00 0.00 0.00 0.00 0.00 00:40:50.485 00:40:51.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:51.426 Nvme0n1 : 10.00 20561.30 80.32 0.00 0.00 0.00 0.00 0.00 00:40:51.426 [2024-12-13T02:51:52.635Z] =================================================================================================================== 00:40:51.426 [2024-12-13T02:51:52.635Z] Total : 20561.30 80.32 0.00 0.00 0.00 0.00 0.00 00:40:51.426 00:40:51.426 00:40:51.426 Latency(us) 00:40:51.426 [2024-12-13T02:51:52.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:51.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:51.426 Nvme0n1 : 10.00 20565.24 80.33 0.00 0.00 6220.82 5586.16 17601.10 00:40:51.426 
[2024-12-13T02:51:52.635Z] =================================================================================================================== 00:40:51.426 [2024-12-13T02:51:52.635Z] Total : 20565.24 80.33 0.00 0.00 6220.82 5586.16 17601.10 00:40:51.426 { 00:40:51.426 "results": [ 00:40:51.426 { 00:40:51.426 "job": "Nvme0n1", 00:40:51.426 "core_mask": "0x2", 00:40:51.426 "workload": "randwrite", 00:40:51.426 "status": "finished", 00:40:51.426 "queue_depth": 128, 00:40:51.426 "io_size": 4096, 00:40:51.426 "runtime": 10.004306, 00:40:51.426 "iops": 20565.244605672797, 00:40:51.426 "mibps": 80.33298674090936, 00:40:51.426 "io_failed": 0, 00:40:51.426 "io_timeout": 0, 00:40:51.426 "avg_latency_us": 6220.818791041256, 00:40:51.426 "min_latency_us": 5586.1638095238095, 00:40:51.426 "max_latency_us": 17601.097142857143 00:40:51.426 } 00:40:51.426 ], 00:40:51.426 "core_count": 1 00:40:51.426 } 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2956580 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2956580 ']' 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2956580 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956580 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956580' 00:40:51.686 killing process with pid 2956580 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2956580 00:40:51.686 Received shutdown signal, test time was about 10.000000 seconds 00:40:51.686 00:40:51.686 Latency(us) 00:40:51.686 [2024-12-13T02:51:52.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:51.686 [2024-12-13T02:51:52.895Z] =================================================================================================================== 00:40:51.686 [2024-12-13T02:51:52.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:51.686 03:51:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2956580 00:40:52.624 03:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:52.624 03:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:40:52.884 03:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:52.884 03:51:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2953339 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2953339 00:40:53.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2953339 Killed "${NVMF_APP[@]}" "$@" 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2958682 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2958682 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2958682 ']' 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:53.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
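The grow itself happens while bdevperf keeps issuing random writes; the verification that follows is only a cluster-count comparison. A minimal sketch of that step, under the same assumptions as the sketch above; $nvmfpid is a stand-in for the target's pid (2953339 in this run), and the expected counts 99 and 61 are the values printed in the trace for this 400M file with 4M clusters and a 150M (38-cluster) lvol.

# Grow the lvstore onto the clusters exposed by the rescanned AIO bdev.
$rootdir/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"

# total_data_clusters roughly doubles (49 -> 99); free_clusters ends at 61.
clusters=$($rootdir/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( clusters == 99 )) || echo "unexpected total_data_clusters: $clusters"

# Leave the lvstore dirty on purpose: SIGKILL the target instead of shutting it down cleanly.
kill -9 "$nvmfpid"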
00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:53.144 03:51:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:53.144 [2024-12-13 03:51:54.299528] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:53.144 [2024-12-13 03:51:54.301582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:53.144 [2024-12-13 03:51:54.301649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:53.404 [2024-12-13 03:51:54.422702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.404 [2024-12-13 03:51:54.522478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:53.404 [2024-12-13 03:51:54.522522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:53.404 [2024-12-13 03:51:54.522535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:53.404 [2024-12-13 03:51:54.522543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:53.404 [2024-12-13 03:51:54.522553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:53.404 [2024-12-13 03:51:54.523906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.664 [2024-12-13 03:51:54.841558] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:53.664 [2024-12-13 03:51:54.841794] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:53.923 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:53.923 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:53.923 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:53.923 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:53.923 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:54.182 [2024-12-13 03:51:55.335800] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:54.182 [2024-12-13 03:51:55.336021] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:54.182 [2024-12-13 03:51:55.336083] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:54.182 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:54.440 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd38d66-f006-41d7-988f-21b82b698d0d -t 2000 00:40:54.699 [ 00:40:54.699 { 00:40:54.699 "name": "0dd38d66-f006-41d7-988f-21b82b698d0d", 00:40:54.699 "aliases": [ 00:40:54.699 "lvs/lvol" 00:40:54.699 ], 00:40:54.699 "product_name": "Logical Volume", 00:40:54.699 "block_size": 4096, 00:40:54.699 "num_blocks": 38912, 00:40:54.699 "uuid": "0dd38d66-f006-41d7-988f-21b82b698d0d", 00:40:54.699 "assigned_rate_limits": { 00:40:54.699 "rw_ios_per_sec": 0, 00:40:54.699 "rw_mbytes_per_sec": 0, 00:40:54.699 
"r_mbytes_per_sec": 0, 00:40:54.699 "w_mbytes_per_sec": 0 00:40:54.699 }, 00:40:54.699 "claimed": false, 00:40:54.699 "zoned": false, 00:40:54.700 "supported_io_types": { 00:40:54.700 "read": true, 00:40:54.700 "write": true, 00:40:54.700 "unmap": true, 00:40:54.700 "flush": false, 00:40:54.700 "reset": true, 00:40:54.700 "nvme_admin": false, 00:40:54.700 "nvme_io": false, 00:40:54.700 "nvme_io_md": false, 00:40:54.700 "write_zeroes": true, 00:40:54.700 "zcopy": false, 00:40:54.700 "get_zone_info": false, 00:40:54.700 "zone_management": false, 00:40:54.700 "zone_append": false, 00:40:54.700 "compare": false, 00:40:54.700 "compare_and_write": false, 00:40:54.700 "abort": false, 00:40:54.700 "seek_hole": true, 00:40:54.700 "seek_data": true, 00:40:54.700 "copy": false, 00:40:54.700 "nvme_iov_md": false 00:40:54.700 }, 00:40:54.700 "driver_specific": { 00:40:54.700 "lvol": { 00:40:54.700 "lvol_store_uuid": "bf2ea579-a5e1-4548-a68a-ce1be4766feb", 00:40:54.700 "base_bdev": "aio_bdev", 00:40:54.700 "thin_provision": false, 00:40:54.700 "num_allocated_clusters": 38, 00:40:54.700 "snapshot": false, 00:40:54.700 "clone": false, 00:40:54.700 "esnap_clone": false 00:40:54.700 } 00:40:54.700 } 00:40:54.700 } 00:40:54.700 ] 00:40:54.700 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:54.700 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:54.700 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:54.959 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:54.959 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:54.959 03:51:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:54.959 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:54.959 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:55.219 [2024-12-13 03:51:56.300702] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:55.219 03:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:55.219 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:55.479 request: 00:40:55.479 { 00:40:55.479 "uuid": "bf2ea579-a5e1-4548-a68a-ce1be4766feb", 00:40:55.479 "method": "bdev_lvol_get_lvstores", 00:40:55.479 "req_id": 1 00:40:55.479 } 00:40:55.479 Got JSON-RPC error response 00:40:55.479 response: 00:40:55.479 { 00:40:55.479 "code": -19, 00:40:55.479 "message": "No such device" 00:40:55.479 } 00:40:55.479 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:55.479 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:55.479 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:55.479 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:55.479 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:55.739 aio_bdev 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:55.739 03:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:55.739 03:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd38d66-f006-41d7-988f-21b82b698d0d -t 2000 00:40:55.998 [ 00:40:55.998 { 00:40:55.998 "name": "0dd38d66-f006-41d7-988f-21b82b698d0d", 00:40:55.998 "aliases": [ 00:40:55.998 "lvs/lvol" 00:40:55.998 ], 00:40:55.998 "product_name": "Logical Volume", 00:40:55.998 "block_size": 4096, 00:40:55.998 "num_blocks": 38912, 00:40:55.998 "uuid": "0dd38d66-f006-41d7-988f-21b82b698d0d", 00:40:55.998 "assigned_rate_limits": { 00:40:55.998 "rw_ios_per_sec": 0, 00:40:55.998 "rw_mbytes_per_sec": 0, 00:40:55.998 "r_mbytes_per_sec": 0, 00:40:55.998 "w_mbytes_per_sec": 0 00:40:55.998 }, 00:40:55.998 "claimed": false, 00:40:55.998 "zoned": false, 00:40:55.998 "supported_io_types": { 00:40:55.998 "read": true, 00:40:55.998 "write": true, 00:40:55.998 "unmap": true, 00:40:55.998 "flush": false, 00:40:55.998 "reset": true, 00:40:55.998 "nvme_admin": false, 00:40:55.998 "nvme_io": false, 00:40:55.998 "nvme_io_md": false, 00:40:55.998 "write_zeroes": true, 00:40:55.998 "zcopy": false, 00:40:55.998 "get_zone_info": false, 00:40:55.998 "zone_management": false, 00:40:55.998 "zone_append": false, 00:40:55.998 "compare": false, 00:40:55.998 "compare_and_write": false, 00:40:55.998 "abort": false, 00:40:55.998 "seek_hole": true, 00:40:55.998 "seek_data": true, 00:40:55.998 "copy": false, 00:40:55.998 "nvme_iov_md": false 00:40:55.998 }, 00:40:55.999 "driver_specific": { 00:40:55.999 "lvol": { 00:40:55.999 "lvol_store_uuid": "bf2ea579-a5e1-4548-a68a-ce1be4766feb", 00:40:55.999 "base_bdev": "aio_bdev", 00:40:55.999 "thin_provision": false, 00:40:55.999 "num_allocated_clusters": 38, 00:40:55.999 "snapshot": false, 00:40:55.999 "clone": false, 00:40:55.999 "esnap_clone": false 00:40:55.999 } 00:40:55.999 } 00:40:55.999 } 00:40:55.999 ] 00:40:55.999 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:55.999 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:55.999 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:56.258 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:56.258 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:56.258 03:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:56.518 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:56.518 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0dd38d66-f006-41d7-988f-21b82b698d0d 00:40:56.518 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf2ea579-a5e1-4548-a68a-ce1be4766feb 00:40:56.777 03:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:57.037 00:40:57.037 real 0m19.035s 00:40:57.037 user 0m36.437s 00:40:57.037 sys 0m3.923s 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:57.037 ************************************ 00:40:57.037 END TEST lvs_grow_dirty 00:40:57.037 ************************************ 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:57.037 nvmf_trace.0 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
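After the SIGKILL the lvstore metadata in the AIO file is left dirty, which is exactly what the restarted interrupt-mode target is handed: re-creating the AIO bdev triggers the blobstore recovery logged above, and the lvol reappears without any explicit import step. A minimal sketch of that recovery check and of the cleanup that ends the test, under the same assumptions as the earlier sketches (the ip-netns wrapper used by the restarted target is omitted).

# Re-create the AIO bdev over the same file; recovery runs during bdev examine.
$rootdir/scripts/rpc.py bdev_aio_create "$rootdir/test/nvmf/target/aio_bdev" aio_bdev 4096
$rootdir/scripts/rpc.py bdev_wait_for_examine

# The recovered lvstore should report the same counts as before the kill.
$rootdir/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
$rootdir/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99

# Cleanup mirrors the tail of the trace.
$rootdir/scripts/rpc.py bdev_lvol_delete 0dd38d66-f006-41d7-988f-21b82b698d0d
$rootdir/scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
$rootdir/scripts/rpc.py bdev_aio_delete aio_bdev
rm -f "$rootdir/test/nvmf/target/aio_bdev"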
00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:57.037 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:57.037 rmmod nvme_tcp 00:40:57.037 rmmod nvme_fabrics 00:40:57.295 rmmod nvme_keyring 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2958682 ']' 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2958682 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2958682 ']' 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2958682 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958682 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958682' 00:40:57.296 killing process with pid 2958682 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2958682 00:40:57.296 03:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2958682 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.232 03:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:00.772 00:41:00.772 real 0m46.225s 00:41:00.772 user 0m56.826s 00:41:00.772 sys 0m10.051s 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:00.772 ************************************ 00:41:00.772 END TEST nvmf_lvs_grow 00:41:00.772 ************************************ 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:00.772 ************************************ 00:41:00.772 START TEST nvmf_bdev_io_wait 00:41:00.772 ************************************ 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:00.772 * Looking for test storage... 
00:41:00.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:00.772 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.773 --rc genhtml_branch_coverage=1 00:41:00.773 --rc genhtml_function_coverage=1 00:41:00.773 --rc genhtml_legend=1 00:41:00.773 --rc geninfo_all_blocks=1 00:41:00.773 --rc geninfo_unexecuted_blocks=1 00:41:00.773 00:41:00.773 ' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.773 --rc genhtml_branch_coverage=1 00:41:00.773 --rc genhtml_function_coverage=1 00:41:00.773 --rc genhtml_legend=1 00:41:00.773 --rc geninfo_all_blocks=1 00:41:00.773 --rc geninfo_unexecuted_blocks=1 00:41:00.773 00:41:00.773 ' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.773 --rc genhtml_branch_coverage=1 00:41:00.773 --rc genhtml_function_coverage=1 00:41:00.773 --rc genhtml_legend=1 00:41:00.773 --rc geninfo_all_blocks=1 00:41:00.773 --rc geninfo_unexecuted_blocks=1 00:41:00.773 00:41:00.773 ' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.773 --rc genhtml_branch_coverage=1 00:41:00.773 --rc genhtml_function_coverage=1 00:41:00.773 --rc genhtml_legend=1 00:41:00.773 --rc geninfo_all_blocks=1 00:41:00.773 --rc 
geninfo_unexecuted_blocks=1 00:41:00.773 00:41:00.773 ' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:00.773 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:00.774 03:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:06.057 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
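gather_supported_nvmf_pci_devs above seeds the e810, x722 and mlx arrays from a vendor:device cache keyed on exactly the IDs listed in those entries. A stand-alone sketch of the same classification for one PCI function, reading the IDs straight from sysfs; the address 0000:af:00.0 is taken from the scan output that follows, and the Mellanox match is collapsed to a wildcard instead of the full ID list:

pci=0000:af:00.0
ven=$(cat "/sys/bus/pci/devices/$pci/vendor")    # 0x8086 = Intel, 0x15b3 = Mellanox
dev=$(cat "/sys/bus/pci/devices/$pci/device")    # 0x159b = E810 port on this node
case "$ven:$dev" in
  0x8086:0x1592|0x8086:0x159b) echo "$pci -> e810" ;;
  0x8086:0x37d2)               echo "$pci -> x722" ;;
  0x15b3:*)                    echo "$pci -> mellanox" ;;
  *)                           echo "$pci -> not usable for SPDK_TEST_NVMF_NICS" ;;
esac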
00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:06.058 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:06.058 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:06.058 Found net devices under 0000:af:00.0: cvl_0_0 00:41:06.058 
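For each supported function the loop above globs /sys/bus/pci/devices/$pci/net/* to find the kernel interface sitting on top of it, which is how cvl_0_0 (and, next, cvl_0_1) is discovered and appended to net_devs. A minimal version of that lookup:

pci=0000:af:00.0                                  # first E810 port from the scan above
for d in "/sys/bus/pci/devices/$pci/net/"*; do
  [ -e "$d" ] || continue                         # glob did not match: no netdev bound to this function
  echo "net device under $pci: ${d##*/}"          # prints cvl_0_0 on this machine
done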
03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:06.058 Found net devices under 0000:af:00.1: cvl_0_1 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:06.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:41:06.058 00:41:06.058 --- 10.0.0.2 ping statistics --- 00:41:06.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.058 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:06.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:06.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:41:06.058 00:41:06.058 --- 10.0.0.1 ping statistics --- 00:41:06.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.058 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:06.058 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:06.059 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:06.059 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:06.059 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:06.059 03:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2962826 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2962826 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2962826 ']' 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
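nvmf_tcp_init above moved cvl_0_0 into the private namespace cvl_0_0_ns_spdk, addressed both ends (10.0.0.1 on the initiator port, 10.0.0.2 inside the namespace), opened TCP/4420 in iptables with an SPDK_NVMF-tagged rule, and ping-checked the path in both directions before the target is launched. The same plumbing, condensed into a runnable sketch with the interface and namespace names from this run:

ns=cvl_0_0_ns_spdk; tgt_if=cvl_0_0; ini_if=cvl_0_1
ip -4 addr flush "$tgt_if"; ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
ping -c 1 10.0.0.2                        # initiator side reaches the target IP
ip netns exec "$ns" ping -c 1 10.0.0.1    # target namespace reaches the initiator IP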
00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:06.059 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:06.059 [2024-12-13 03:52:07.110343] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:06.059 [2024-12-13 03:52:07.112448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:06.059 [2024-12-13 03:52:07.112519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:06.059 [2024-12-13 03:52:07.230582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:06.319 [2024-12-13 03:52:07.338430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:06.319 [2024-12-13 03:52:07.338473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:06.319 [2024-12-13 03:52:07.338485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:06.319 [2024-12-13 03:52:07.338494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:06.319 [2024-12-13 03:52:07.338503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:06.319 [2024-12-13 03:52:07.340771] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:06.319 [2024-12-13 03:52:07.340847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:06.319 [2024-12-13 03:52:07.340906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.319 [2024-12-13 03:52:07.340933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:06.319 [2024-12-13 03:52:07.341348] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
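waitforlisten (pid 2962826) is now polling until the freshly forked target answers on its RPC socket; the DPDK, reactor and interrupt-mode notices above are the target coming up underneath that wait. A simplified version of the wait loop, assuming the default /var/tmp/spdk.sock path and SPDK's rpc.py client; the real helper in autotest_common.sh does more bookkeeping than this:

pid=2962826                                   # nvmfpid from the trace above
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    break                                     # socket is up and answering RPCs
  fi
  sleep 0.5
done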
00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:06.888 03:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:07.148 [2024-12-13 03:52:08.170617] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:07.148 [2024-12-13 03:52:08.171564] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:07.148 [2024-12-13 03:52:08.172731] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:07.148 [2024-12-13 03:52:08.173609] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
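Because the target was started with --wait-for-rpc, bdev_io_wait.sh gets to shrink the bdev_io pool before subsystem init: the rpc_cmd calls above set a pool of 5 and a cache of 1 (small on purpose, presumably so that I/O runs out of bdev_io structures and exercises the io_wait path this test is named after) and then let framework_start_init proceed. The same two calls issued directly with rpc.py would look like:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1   # tiny bdev_io pool (5) and cache (1)
"$rpc_py" -s /var/tmp/spdk.sock framework_start_init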
00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:07.148 [2024-12-13 03:52:08.185549] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:07.148 Malloc0 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:07.148 [2024-12-13 03:52:08.317854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2963010 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2963013 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.148 { 00:41:07.148 "params": { 00:41:07.148 "name": "Nvme$subsystem", 00:41:07.148 "trtype": "$TEST_TRANSPORT", 00:41:07.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.148 "adrfam": "ipv4", 00:41:07.148 "trsvcid": "$NVMF_PORT", 00:41:07.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.148 "hdgst": ${hdgst:-false}, 00:41:07.148 "ddgst": ${ddgst:-false} 00:41:07.148 }, 00:41:07.148 "method": "bdev_nvme_attach_controller" 00:41:07.148 } 00:41:07.148 EOF 00:41:07.148 )") 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2963015 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.148 { 00:41:07.148 "params": { 00:41:07.148 "name": "Nvme$subsystem", 00:41:07.148 "trtype": "$TEST_TRANSPORT", 00:41:07.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.148 "adrfam": "ipv4", 00:41:07.148 "trsvcid": "$NVMF_PORT", 00:41:07.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.148 "hdgst": ${hdgst:-false}, 00:41:07.148 "ddgst": ${ddgst:-false} 00:41:07.148 }, 00:41:07.148 "method": "bdev_nvme_attach_controller" 00:41:07.148 } 00:41:07.148 EOF 00:41:07.148 )") 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2963019 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.148 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.149 { 00:41:07.149 "params": { 00:41:07.149 "name": "Nvme$subsystem", 00:41:07.149 "trtype": "$TEST_TRANSPORT", 00:41:07.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.149 "adrfam": "ipv4", 00:41:07.149 "trsvcid": "$NVMF_PORT", 00:41:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.149 "hdgst": ${hdgst:-false}, 00:41:07.149 "ddgst": ${ddgst:-false} 00:41:07.149 }, 00:41:07.149 "method": "bdev_nvme_attach_controller" 00:41:07.149 } 00:41:07.149 EOF 00:41:07.149 )") 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.149 { 00:41:07.149 "params": { 00:41:07.149 "name": "Nvme$subsystem", 00:41:07.149 "trtype": "$TEST_TRANSPORT", 00:41:07.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.149 "adrfam": "ipv4", 00:41:07.149 "trsvcid": "$NVMF_PORT", 00:41:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.149 "hdgst": ${hdgst:-false}, 00:41:07.149 "ddgst": ${ddgst:-false} 00:41:07.149 }, 00:41:07.149 "method": "bdev_nvme_attach_controller" 00:41:07.149 } 00:41:07.149 EOF 00:41:07.149 )") 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2963010 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
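The bdev_io_wait.sh@20-25 rpc_cmd calls a little earlier in this trace provision the target side: a TCP transport (with the -o -u 8192 options used by the test), a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. As plain rpc.py invocations against the target's socket:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
"$rpc_py" -s "$sock" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" -s "$sock" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc_py" -s "$sock" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420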
00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.149 "params": { 00:41:07.149 "name": "Nvme1", 00:41:07.149 "trtype": "tcp", 00:41:07.149 "traddr": "10.0.0.2", 00:41:07.149 "adrfam": "ipv4", 00:41:07.149 "trsvcid": "4420", 00:41:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.149 "hdgst": false, 00:41:07.149 "ddgst": false 00:41:07.149 }, 00:41:07.149 "method": "bdev_nvme_attach_controller" 00:41:07.149 }' 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.149 "params": { 00:41:07.149 "name": "Nvme1", 00:41:07.149 "trtype": "tcp", 00:41:07.149 "traddr": "10.0.0.2", 00:41:07.149 "adrfam": "ipv4", 00:41:07.149 "trsvcid": "4420", 00:41:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.149 "hdgst": false, 00:41:07.149 "ddgst": false 00:41:07.149 }, 00:41:07.149 "method": "bdev_nvme_attach_controller" 00:41:07.149 }' 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.149 "params": { 00:41:07.149 "name": "Nvme1", 00:41:07.149 "trtype": "tcp", 00:41:07.149 "traddr": "10.0.0.2", 00:41:07.149 "adrfam": "ipv4", 00:41:07.149 "trsvcid": "4420", 00:41:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.149 "hdgst": false, 00:41:07.149 "ddgst": false 00:41:07.149 }, 00:41:07.149 "method": "bdev_nvme_attach_controller" 00:41:07.149 }' 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:07.149 03:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.149 "params": { 00:41:07.149 "name": "Nvme1", 00:41:07.149 "trtype": "tcp", 00:41:07.149 "traddr": "10.0.0.2", 00:41:07.149 "adrfam": "ipv4", 00:41:07.149 "trsvcid": "4420", 00:41:07.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.149 "hdgst": false, 00:41:07.149 "ddgst": false 00:41:07.149 }, 00:41:07.149 "method": "bdev_nvme_attach_controller" 00:41:07.149 }' 00:41:07.408 [2024-12-13 03:52:08.395862] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:07.408 [2024-12-13 03:52:08.395961] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:07.408 [2024-12-13 03:52:08.398567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:07.408 [2024-12-13 03:52:08.398604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:41:07.408 [2024-12-13 03:52:08.398640] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:07.408 [2024-12-13 03:52:08.398678] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:07.408 [2024-12-13 03:52:08.398861] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:07.408 [2024-12-13 03:52:08.398964] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:07.667 [2024-12-13 03:52:08.627225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.667 [2024-12-13 03:52:08.723904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.667 [2024-12-13 03:52:08.735935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:07.667 [2024-12-13 03:52:08.821106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.667 [2024-12-13 03:52:08.837344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:07.926 [2024-12-13 03:52:08.921581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.926 [2024-12-13 03:52:08.925338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:41:07.926 [2024-12-13 03:52:09.038047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:08.185 Running I/O for 1 seconds... 00:41:08.185 Running I/O for 1 seconds... 00:41:08.444 Running I/O for 1 seconds... 00:41:08.444 Running I/O for 1 seconds... 
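At this point four bdevperf instances (write, read, flush and unmap, one core mask each) are running their one-second jobs; each one received its NVMe-oF attach configuration on /dev/fd/63 through process substitution, with gen_nvmf_target_json filling in exactly the params echoed above. A stand-alone illustration of one such launch; only the params block is copied from the trace, while the surrounding "subsystems"/"config" wrapper and the temp-file delivery are assumptions standing in for what /dev/fd/63 carried:

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json "$cfg"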
00:41:09.384 11213.00 IOPS, 43.80 MiB/s 00:41:09.384 Latency(us) 00:41:09.384 [2024-12-13T02:52:10.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.384 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:09.384 Nvme1n1 : 1.01 11272.87 44.03 0.00 0.00 11314.94 4431.48 15104.49 00:41:09.384 [2024-12-13T02:52:10.593Z] =================================================================================================================== 00:41:09.384 [2024-12-13T02:52:10.593Z] Total : 11272.87 44.03 0.00 0.00 11314.94 4431.48 15104.49 00:41:09.384 213880.00 IOPS, 835.47 MiB/s 00:41:09.384 Latency(us) 00:41:09.384 [2024-12-13T02:52:10.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.384 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:09.384 Nvme1n1 : 1.00 213521.38 834.07 0.00 0.00 596.53 273.07 1646.20 00:41:09.384 [2024-12-13T02:52:10.593Z] =================================================================================================================== 00:41:09.384 [2024-12-13T02:52:10.593Z] Total : 213521.38 834.07 0.00 0.00 596.53 273.07 1646.20 00:41:09.384 10169.00 IOPS, 39.72 MiB/s 00:41:09.384 Latency(us) 00:41:09.384 [2024-12-13T02:52:10.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.384 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:09.384 Nvme1n1 : 1.01 10247.16 40.03 0.00 0.00 12451.19 2402.99 17351.44 00:41:09.384 [2024-12-13T02:52:10.593Z] =================================================================================================================== 00:41:09.384 [2024-12-13T02:52:10.593Z] Total : 10247.16 40.03 0.00 0.00 12451.19 2402.99 17351.44 00:41:09.384 9635.00 IOPS, 37.64 MiB/s 00:41:09.384 Latency(us) 00:41:09.384 [2024-12-13T02:52:10.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.384 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:09.384 Nvme1n1 : 1.01 9707.93 37.92 0.00 0.00 13143.95 4649.94 20097.71 00:41:09.384 [2024-12-13T02:52:10.593Z] =================================================================================================================== 00:41:09.384 [2024-12-13T02:52:10.593Z] Total : 9707.93 37.92 0.00 0.00 13143.95 4649.94 20097.71 00:41:09.953 03:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2963013 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2963015 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2963019 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:10.213 rmmod nvme_tcp 00:41:10.213 rmmod nvme_fabrics 00:41:10.213 rmmod nvme_keyring 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2962826 ']' 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2962826 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2962826 ']' 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2962826 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962826 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962826' 00:41:10.213 killing process with pid 2962826 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2962826 00:41:10.213 03:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2962826 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:11.592 03:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:13.498 00:41:13.498 real 0m12.886s 00:41:13.498 user 0m23.607s 00:41:13.498 sys 0m6.777s 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:13.498 ************************************ 00:41:13.498 END TEST nvmf_bdev_io_wait 00:41:13.498 ************************************ 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:13.498 ************************************ 00:41:13.498 START TEST nvmf_queue_depth 00:41:13.498 ************************************ 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:13.498 * Looking for test storage... 
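Before queue_depth.sh starts looking for test storage, nvmftestfini above has already torn the bdev_io_wait fixture down: the target process is killed, the nvme-tcp/nvme-fabrics modules are unloaded, only the SPDK_NVMF-tagged iptables rule is dropped, and the namespace plumbing is removed. Condensed, with the namespace deletion written out as an assumption for what _remove_spdk_ns does:

kill "$nvmfpid"                                         # pid 2962826 in this run
modprobe -v -r nvme-tcp                                 # also pulls out nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule except the SPDK-tagged one
ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1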
00:41:13.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:13.498 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.499 --rc genhtml_branch_coverage=1 00:41:13.499 --rc genhtml_function_coverage=1 00:41:13.499 --rc genhtml_legend=1 00:41:13.499 --rc geninfo_all_blocks=1 00:41:13.499 --rc geninfo_unexecuted_blocks=1 00:41:13.499 00:41:13.499 ' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.499 --rc genhtml_branch_coverage=1 00:41:13.499 --rc genhtml_function_coverage=1 00:41:13.499 --rc genhtml_legend=1 00:41:13.499 --rc geninfo_all_blocks=1 00:41:13.499 --rc geninfo_unexecuted_blocks=1 00:41:13.499 00:41:13.499 ' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.499 --rc genhtml_branch_coverage=1 00:41:13.499 --rc genhtml_function_coverage=1 00:41:13.499 --rc genhtml_legend=1 00:41:13.499 --rc geninfo_all_blocks=1 00:41:13.499 --rc geninfo_unexecuted_blocks=1 00:41:13.499 00:41:13.499 ' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.499 --rc genhtml_branch_coverage=1 00:41:13.499 --rc genhtml_function_coverage=1 00:41:13.499 --rc genhtml_legend=1 00:41:13.499 --rc geninfo_all_blocks=1 00:41:13.499 --rc 
geninfo_unexecuted_blocks=1 00:41:13.499 00:41:13.499 ' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:13.499 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.759 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:13.760 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:13.760 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:13.760 03:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:19.036 03:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:19.036 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:19.036 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:41:19.036 Found net devices under 0000:af:00.0: cvl_0_0 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:19.036 Found net devices under 0000:af:00.1: cvl_0_1 00:41:19.036 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:19.037 03:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:19.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:19.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:41:19.037 00:41:19.037 --- 10.0.0.2 ping statistics --- 00:41:19.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.037 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:19.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:19.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:41:19.037 00:41:19.037 --- 10.0.0.1 ping statistics --- 00:41:19.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.037 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2967074 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2967074 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2967074 ']' 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:19.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
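Annotation (not part of the captured trace): the nvmf_tcp_init block above turns the two E810 ports into a point-to-point test link: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP/4420, and a ping in each direction confirms connectivity. Condensed from the trace; interface and namespace names are the ones used on this particular builder.

  # Sketch of the topology set up by nvmf_tcp_init as traced above.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                           # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address (root namespace)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address (inside namespace)
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # sanity-check both directions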
00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:19.037 03:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:19.296 [2024-12-13 03:52:20.319683] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:19.296 [2024-12-13 03:52:20.321784] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:19.296 [2024-12-13 03:52:20.321853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:19.296 [2024-12-13 03:52:20.441456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.555 [2024-12-13 03:52:20.548221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.555 [2024-12-13 03:52:20.548263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.555 [2024-12-13 03:52:20.548276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.555 [2024-12-13 03:52:20.548285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.555 [2024-12-13 03:52:20.548295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:19.555 [2024-12-13 03:52:20.549495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:19.814 [2024-12-13 03:52:20.876285] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:19.814 [2024-12-13 03:52:20.876536] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
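Annotation (not part of the captured trace): nvmfappstart then launches the target inside that namespace with a one-core mask and interrupt mode enabled (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2), and waitforlisten blocks until the RPC socket answers; the NOTICE lines above confirm the reactor and the app/poll-group threads came up in interrupt mode. A rough equivalent of that start-and-wait step is sketched below; using spdk_get_version as the readiness probe is an assumption, the harness' own waitforlisten may poll differently.

  # Sketch: start nvmf_tgt in the namespace and wait for its RPC socket to answer.
  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      # rpc.py fails until the target is listening on /var/tmp/spdk.sock
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done
  echo "nvmf_tgt up as pid $nvmfpid"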
00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.076 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.076 [2024-12-13 03:52:21.154497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.077 Malloc0 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.077 [2024-12-13 03:52:21.266395] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2967181 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2967181 /var/tmp/bdevperf.sock 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2967181 ']' 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:20.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:20.077 03:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:20.336 [2024-12-13 03:52:21.344100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:41:20.336 [2024-12-13 03:52:21.344190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967181 ] 00:41:20.336 [2024-12-13 03:52:21.458404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.595 [2024-12-13 03:52:21.563133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.162 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:21.163 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:21.163 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:21.163 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.163 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:21.422 NVMe0n1 00:41:21.422 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.422 03:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:21.422 Running I/O for 10 seconds... 00:41:23.296 10240.00 IOPS, 40.00 MiB/s [2024-12-13T02:52:25.881Z] 10572.00 IOPS, 41.30 MiB/s [2024-12-13T02:52:26.818Z] 10585.33 IOPS, 41.35 MiB/s [2024-12-13T02:52:27.755Z] 10744.25 IOPS, 41.97 MiB/s [2024-12-13T02:52:28.692Z] 10742.00 IOPS, 41.96 MiB/s [2024-12-13T02:52:29.630Z] 10748.67 IOPS, 41.99 MiB/s [2024-12-13T02:52:30.566Z] 10760.00 IOPS, 42.03 MiB/s [2024-12-13T02:52:31.502Z] 10748.12 IOPS, 41.98 MiB/s [2024-12-13T02:52:32.880Z] 10762.22 IOPS, 42.04 MiB/s [2024-12-13T02:52:32.880Z] 10757.60 IOPS, 42.02 MiB/s 00:41:31.671 Latency(us) 00:41:31.671 [2024-12-13T02:52:32.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:31.671 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:31.671 Verification LBA range: start 0x0 length 0x4000 00:41:31.671 NVMe0n1 : 10.05 10798.17 42.18 0.00 0.00 94499.09 12982.37 58919.98 00:41:31.671 [2024-12-13T02:52:32.880Z] =================================================================================================================== 00:41:31.671 [2024-12-13T02:52:32.880Z] Total : 10798.17 42.18 0.00 0.00 94499.09 12982.37 58919.98 00:41:31.671 { 00:41:31.671 "results": [ 00:41:31.671 { 00:41:31.671 "job": "NVMe0n1", 00:41:31.671 "core_mask": "0x1", 00:41:31.671 "workload": "verify", 00:41:31.671 "status": "finished", 00:41:31.671 "verify_range": { 00:41:31.671 "start": 0, 00:41:31.671 "length": 16384 00:41:31.671 }, 00:41:31.671 "queue_depth": 1024, 00:41:31.671 "io_size": 4096, 00:41:31.671 "runtime": 10.053651, 00:41:31.671 "iops": 10798.166755539853, 00:41:31.671 "mibps": 42.18033888882755, 00:41:31.671 "io_failed": 0, 00:41:31.671 "io_timeout": 0, 00:41:31.671 "avg_latency_us": 94499.08892940155, 00:41:31.671 "min_latency_us": 12982.369523809524, 00:41:31.671 "max_latency_us": 58919.984761904765 00:41:31.671 } 
00:41:31.671 ], 00:41:31.671 "core_count": 1 00:41:31.671 } 00:41:31.671 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2967181 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2967181 ']' 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2967181 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2967181 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2967181' 00:41:31.672 killing process with pid 2967181 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2967181 00:41:31.672 Received shutdown signal, test time was about 10.000000 seconds 00:41:31.672 00:41:31.672 Latency(us) 00:41:31.672 [2024-12-13T02:52:32.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:31.672 [2024-12-13T02:52:32.881Z] =================================================================================================================== 00:41:31.672 [2024-12-13T02:52:32.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:31.672 03:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2967181 00:41:32.608 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:32.608 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:32.609 rmmod nvme_tcp 00:41:32.609 rmmod nvme_fabrics 00:41:32.609 rmmod nvme_keyring 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
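Annotation (not part of the captured trace): the measurement half of this queue-depth test ran bdevperf as a second process against the subsystem exported earlier (nqn.2016-06.io.spdk:cnode1 backed by Malloc0, listening on 10.0.0.2:4420): bdevperf was started with -z on /var/tmp/bdevperf.sock at queue depth 1024, a controller was attached over TCP, and perform_tests drove 10 seconds of 4 KiB verify I/O, landing around 10.8K IOPS in this run. The essential commands, lifted from the trace (paths are the ones used on this builder):

  # Sketch of the bdevperf side of target/queue_depth.sh as traced above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  sleep 1   # crude stand-in for waitforlisten on $SOCK
  # attach the NVMe/TCP controller exported by the target in the namespace
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # run the configured workload and print per-job results
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  kill "$bdevperf_pid"

The rmmod output above and the killprocess/iptables/namespace lines that follow are nvmftestfini tearing the setup back down before the next test (nvmf_target_multipath) repeats the same initialization.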
00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2967074 ']' 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2967074 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2967074 ']' 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2967074 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2967074 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2967074' 00:41:32.609 killing process with pid 2967074 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2967074 00:41:32.609 03:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2967074 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:33.988 03:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:35.897 03:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:35.897 00:41:35.897 real 0m22.434s 00:41:35.897 user 0m26.878s 00:41:35.897 sys 0m6.309s 00:41:35.897 03:52:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:35.897 03:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:35.897 ************************************ 00:41:35.897 END TEST nvmf_queue_depth 00:41:35.897 ************************************ 00:41:35.897 03:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:35.897 03:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:35.897 03:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:35.897 03:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:35.897 ************************************ 00:41:35.897 START TEST nvmf_target_multipath 00:41:35.897 ************************************ 00:41:35.897 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:35.897 * Looking for test storage... 00:41:35.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:35.897 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:35.897 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:35.897 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.157 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.158 --rc genhtml_branch_coverage=1 00:41:36.158 --rc genhtml_function_coverage=1 00:41:36.158 --rc genhtml_legend=1 00:41:36.158 --rc geninfo_all_blocks=1 00:41:36.158 --rc geninfo_unexecuted_blocks=1 00:41:36.158 00:41:36.158 ' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.158 --rc genhtml_branch_coverage=1 00:41:36.158 --rc genhtml_function_coverage=1 00:41:36.158 --rc genhtml_legend=1 00:41:36.158 --rc geninfo_all_blocks=1 00:41:36.158 --rc geninfo_unexecuted_blocks=1 00:41:36.158 00:41:36.158 ' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.158 --rc genhtml_branch_coverage=1 00:41:36.158 --rc genhtml_function_coverage=1 00:41:36.158 --rc genhtml_legend=1 
00:41:36.158 --rc geninfo_all_blocks=1 00:41:36.158 --rc geninfo_unexecuted_blocks=1 00:41:36.158 00:41:36.158 ' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:36.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.158 --rc genhtml_branch_coverage=1 00:41:36.158 --rc genhtml_function_coverage=1 00:41:36.158 --rc genhtml_legend=1 00:41:36.158 --rc geninfo_all_blocks=1 00:41:36.158 --rc geninfo_unexecuted_blocks=1 00:41:36.158 00:41:36.158 ' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.158 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:36.159 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:36.159 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:36.159 03:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:41.540 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:41.541 03:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:41.541 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:41.541 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:41.541 03:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:41.541 Found net devices under 0000:af:00.0: cvl_0_0 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:41.541 Found net devices under 0000:af:00.1: cvl_0_1 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:41.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:41.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:41:41.541 00:41:41.541 --- 10.0.0.2 ping statistics --- 00:41:41.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.541 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:41.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:41.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:41:41.541 00:41:41.541 --- 10.0.0.1 ping statistics --- 00:41:41.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.541 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:41.541 only one NIC for nvmf test 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:41.541 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:41.542 rmmod nvme_tcp 00:41:41.542 rmmod nvme_fabrics 00:41:41.542 rmmod nvme_keyring 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:41.542 03:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:41.542 03:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:43.447 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:43.447 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:43.447 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:43.447 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:43.447 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:43.447 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:43.448 03:52:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:43.448 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:43.707 00:41:43.707 real 0m7.657s 00:41:43.707 user 0m1.584s 00:41:43.707 sys 0m3.995s 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:43.707 ************************************ 00:41:43.707 END TEST nvmf_target_multipath 00:41:43.707 ************************************ 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:43.707 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:43.707 ************************************ 00:41:43.707 START TEST nvmf_zcopy 00:41:43.708 ************************************ 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:43.708 * Looking for test storage... 
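For reference: the multipath test above bails out early because the address it probes for a second NIC is empty on this rig, so it prints 'only one NIC for nvmf test', runs nvmftestfini and returns 0. A minimal sketch of that guard follows; the variable name is only inferred (hypothetical) from the surrounding trace, where NVMF_SECOND_INITIATOR_IP is left unset and the guard itself shows up as a bare '[ -z ]' on an empty value:

    if [ -z "$NVMF_SECOND_INITIATOR_IP" ]; then   # hypothetical name; the trace only shows '[' -z ']' evaluating an empty string
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi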
00:41:43.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:43.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:43.708 --rc genhtml_branch_coverage=1 00:41:43.708 --rc genhtml_function_coverage=1 00:41:43.708 --rc genhtml_legend=1 00:41:43.708 --rc geninfo_all_blocks=1 00:41:43.708 --rc geninfo_unexecuted_blocks=1 00:41:43.708 00:41:43.708 ' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:43.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:43.708 --rc genhtml_branch_coverage=1 00:41:43.708 --rc genhtml_function_coverage=1 00:41:43.708 --rc genhtml_legend=1 00:41:43.708 --rc geninfo_all_blocks=1 00:41:43.708 --rc geninfo_unexecuted_blocks=1 00:41:43.708 00:41:43.708 ' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:43.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:43.708 --rc genhtml_branch_coverage=1 00:41:43.708 --rc genhtml_function_coverage=1 00:41:43.708 --rc genhtml_legend=1 00:41:43.708 --rc geninfo_all_blocks=1 00:41:43.708 --rc geninfo_unexecuted_blocks=1 00:41:43.708 00:41:43.708 ' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:43.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:43.708 --rc genhtml_branch_coverage=1 00:41:43.708 --rc genhtml_function_coverage=1 00:41:43.708 --rc genhtml_legend=1 00:41:43.708 --rc geninfo_all_blocks=1 00:41:43.708 --rc geninfo_unexecuted_blocks=1 00:41:43.708 00:41:43.708 ' 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:43.708 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:43.968 03:52:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:43.968 03:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:41:49.239 03:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:49.239 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:49.239 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:49.239 Found net devices under 0000:af:00.0: cvl_0_0 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:49.239 Found net devices under 0000:af:00.1: cvl_0_1 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:49.239 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:49.240 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:49.240 03:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:49.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:49.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:41:49.499 00:41:49.499 --- 10.0.0.2 ping statistics --- 00:41:49.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.499 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:49.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:49.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:41:49.499 00:41:49.499 --- 10.0.0.1 ping statistics --- 00:41:49.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:49.499 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2976017 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2976017 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2976017 ']' 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:49.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:49.499 03:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:49.499 [2024-12-13 03:52:50.607658] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:49.499 [2024-12-13 03:52:50.609744] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:49.499 [2024-12-13 03:52:50.609826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:49.758 [2024-12-13 03:52:50.727404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.758 [2024-12-13 03:52:50.831982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:49.758 [2024-12-13 03:52:50.832023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:49.758 [2024-12-13 03:52:50.832036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:49.758 [2024-12-13 03:52:50.832061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:49.758 [2024-12-13 03:52:50.832072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:49.758 [2024-12-13 03:52:50.833454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:50.016 [2024-12-13 03:52:51.147496] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:50.016 [2024-12-13 03:52:51.147734] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
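For reference, the target traced above is started inside the namespace that nvmf_tcp_init just created; a roughly equivalent manual invocation, using the same flags and paths that appear in the trace, would be:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!   # this run got PID 2976017
    # waitforlisten then blocks until the app answers on /var/tmp/spdk.sock

The single-core mask (-m 0x2) combined with --interrupt-mode is what produces the 'Total cores available: 1' and 'Set spdk_thread ... to intr mode' notices above.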
00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.275 [2024-12-13 03:52:51.466262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.275 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.275 [2024-12-13 03:52:51.482458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:50.535 03:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.535 malloc0 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:50.535 { 00:41:50.535 "params": { 00:41:50.535 "name": "Nvme$subsystem", 00:41:50.535 "trtype": "$TEST_TRANSPORT", 00:41:50.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:50.535 "adrfam": "ipv4", 00:41:50.535 "trsvcid": "$NVMF_PORT", 00:41:50.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:50.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:50.535 "hdgst": ${hdgst:-false}, 00:41:50.535 "ddgst": ${ddgst:-false} 00:41:50.535 }, 00:41:50.535 "method": "bdev_nvme_attach_controller" 00:41:50.535 } 00:41:50.535 EOF 00:41:50.535 )") 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:41:50.535 03:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:50.535 "params": { 00:41:50.535 "name": "Nvme1", 00:41:50.535 "trtype": "tcp", 00:41:50.535 "traddr": "10.0.0.2", 00:41:50.535 "adrfam": "ipv4", 00:41:50.535 "trsvcid": "4420", 00:41:50.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:50.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:50.535 "hdgst": false, 00:41:50.535 "ddgst": false 00:41:50.535 }, 00:41:50.535 "method": "bdev_nvme_attach_controller" 00:41:50.535 }' 00:41:50.535 [2024-12-13 03:52:51.633064] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
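The rpc_cmd calls traced above provision the zero-copy target end to end: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with listeners on 10.0.0.2:4420, a malloc0 bdev (bdev_malloc_create 32 4096), and that bdev attached as namespace 1. A hedged sketch of the same sequence issued directly through scripts/rpc.py (socket path assumed to be /var/tmp/spdk.sock; the log itself goes through the rpc_cmd wrapper) is:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed invocation; flags below are copied from the trace
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1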
00:41:50.535 [2024-12-13 03:52:51.633144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976214 ] 00:41:50.794 [2024-12-13 03:52:51.746446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:50.794 [2024-12-13 03:52:51.856221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:51.361 Running I/O for 10 seconds... 00:41:53.234 7271.00 IOPS, 56.80 MiB/s [2024-12-13T02:52:55.819Z] 7282.00 IOPS, 56.89 MiB/s [2024-12-13T02:52:56.755Z] 7314.67 IOPS, 57.15 MiB/s [2024-12-13T02:52:57.692Z] 7319.50 IOPS, 57.18 MiB/s [2024-12-13T02:52:58.629Z] 7335.80 IOPS, 57.31 MiB/s [2024-12-13T02:52:59.566Z] 7344.00 IOPS, 57.38 MiB/s [2024-12-13T02:53:00.504Z] 7353.00 IOPS, 57.45 MiB/s [2024-12-13T02:53:01.441Z] 7334.12 IOPS, 57.30 MiB/s [2024-12-13T02:53:02.820Z] 7326.67 IOPS, 57.24 MiB/s [2024-12-13T02:53:02.820Z] 7308.60 IOPS, 57.10 MiB/s 00:42:01.611 Latency(us) 00:42:01.611 [2024-12-13T02:53:02.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:01.611 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:01.611 Verification LBA range: start 0x0 length 0x1000 00:42:01.611 Nvme1n1 : 10.01 7311.50 57.12 0.00 0.00 17457.55 2324.97 25090.93 00:42:01.611 [2024-12-13T02:53:02.820Z] =================================================================================================================== 00:42:01.611 [2024-12-13T02:53:02.820Z] Total : 7311.50 57.12 0.00 0.00 17457.55 2324.97 25090.93 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2978036 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:02.179 { 00:42:02.179 "params": { 00:42:02.179 "name": "Nvme$subsystem", 00:42:02.179 "trtype": "$TEST_TRANSPORT", 00:42:02.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.179 "adrfam": "ipv4", 00:42:02.179 "trsvcid": "$NVMF_PORT", 00:42:02.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.179 "hdgst": ${hdgst:-false}, 00:42:02.179 "ddgst": ${ddgst:-false} 00:42:02.179 }, 00:42:02.179 "method": "bdev_nvme_attach_controller" 00:42:02.179 } 00:42:02.179 EOF 00:42:02.179 )") 00:42:02.179 [2024-12-13 03:53:03.306029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:42:02.179 [2024-12-13 03:53:03.306066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:02.179 [2024-12-13 03:53:03.314021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.314047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:02.179 03:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:02.179 "params": { 00:42:02.179 "name": "Nvme1", 00:42:02.179 "trtype": "tcp", 00:42:02.179 "traddr": "10.0.0.2", 00:42:02.179 "adrfam": "ipv4", 00:42:02.179 "trsvcid": "4420", 00:42:02.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:02.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:02.179 "hdgst": false, 00:42:02.179 "ddgst": false 00:42:02.179 }, 00:42:02.179 "method": "bdev_nvme_attach_controller" 00:42:02.179 }' 00:42:02.179 [2024-12-13 03:53:03.321981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.322004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.329996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.330016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.337974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.337992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.349972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.349993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.357992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.358011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.365976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.365994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.373878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
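Both bdevperf invocations read their configuration from a /dev/fd/NN path (/dev/fd/62 for the 10-second verify run, /dev/fd/63 for the 5-second randrw run starting here). That path is most plausibly the anonymous pipe created by bash process substitution around gen_nvmf_target_json; a sketch under that assumption, using the flags shown in the trace:

    # Assumed shape: feed the generated target JSON to bdevperf via process substitution,
    # which is what produces a --json /dev/fd/NN argument in the xtrace output.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!        # the harness records the perf pid (2978036 in this run) for later checks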
00:42:02.179 [2024-12-13 03:53:03.373954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978036 ] 00:42:02.179 [2024-12-13 03:53:03.373956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.373974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.179 [2024-12-13 03:53:03.381979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.179 [2024-12-13 03:53:03.381998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.438 [2024-12-13 03:53:03.389959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.389977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.397989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.398009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.405977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.405996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.413977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.413995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.421973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.421992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.429985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.430004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.441966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.441986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.453977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.453996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.465961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.465980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.477973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.477992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.486321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.439 [2024-12-13 03:53:03.489976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.489996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:42:02.439 [2024-12-13 03:53:03.501976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.502015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.513984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.514004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.525977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.525996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.537963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.537982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.549984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.550003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.561963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.561983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.573972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.573991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.585973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.585991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.595278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.439 [2024-12-13 03:53:03.597968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.597987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.609990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.610011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.621971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.621989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.633959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.633977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.439 [2024-12-13 03:53:03.645979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.439 [2024-12-13 03:53:03.645998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.657957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.657976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 
03:53:03.669992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.670011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.681985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.682006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.693993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.694014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.705974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.705992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.717974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.717991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.729962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.729981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.741970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.741987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.753960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.753980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.765978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.765996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.777975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.777994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.789959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.789978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.801973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.801992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.813976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.813996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.825968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.825989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.837987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.838007] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.849958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.849977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.861970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.861988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.873970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.873993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.885956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.885975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.698 [2024-12-13 03:53:03.897982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.698 [2024-12-13 03:53:03.898001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.957 [2024-12-13 03:53:03.909978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.957 [2024-12-13 03:53:03.909998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.957 [2024-12-13 03:53:03.921967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.921986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:03.933981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.934002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:03.945963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.945984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:03.957979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.957999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:03.969986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.970006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:03.981974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.981994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:03.993972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:03.993991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.005990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.006010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.017962] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.017987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.029981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.030001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.041960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.041980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.053979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.054001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.065974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.065995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.077960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.077979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.089968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.089986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.101974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.101997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.113980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.114000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.125974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.125993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:02.958 [2024-12-13 03:53:04.137961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:02.958 [2024-12-13 03:53:04.137981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.189234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.189261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.198035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.198058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 Running I/O for 5 seconds... 
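From here until the run ends, the log is dominated by paired subsystem.c:2130 / nvmf_rpc.c:1520 errors: the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1 against cnode1 while the randrw workload runs, and each attempt is rejected as "already in use". An illustrative (not verbatim) shape for that kind of namespace churn during I/O, reusing the $rpc sketch from above:

    # Illustrative only: churn namespace add/remove on cnode1 while bdevperf (perfpid) runs.
    # The exact loop in target/zcopy.sh may differ; the "Requested NSID 1 already in use"
    # messages above are the expected failure path when an add lands while NSID 1 is attached.
    while kill -0 "$perfpid" 2>/dev/null; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
    done
    wait "$perfpid"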
00:42:03.217 [2024-12-13 03:53:04.216772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.216797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.231571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.231596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.249026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.249051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.261366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.261390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.276090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.276114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.292525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.292549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.308991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.309015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.322553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.322579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.339866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.339898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.355842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.355866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.373084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.373107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.386988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.387011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.403959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.403983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.217 [2024-12-13 03:53:04.419724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.217 [2024-12-13 03:53:04.419753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.436945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 
[2024-12-13 03:53:04.436970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.452973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.452997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.466107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.466131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.478826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.478850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.495584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.495608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.512441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.512466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.528082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.528106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.545176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.545201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.558444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.558467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.575738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.575763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.591877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.591901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.609401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.609426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.622490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.622514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.639605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.639629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.656612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.656636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.476 [2024-12-13 03:53:04.673140] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.476 [2024-12-13 03:53:04.673163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.686527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.686552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.704009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.704033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.720480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.720506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.735721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.735745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.753584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.753608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.766802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.766828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.784515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.784538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.797534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.797557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.810968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.810991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.828075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.828099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.842076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.842099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.854356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.854380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.866789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.866813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.883973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.883997] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.900071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.900095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.916465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.916489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.736 [2024-12-13 03:53:04.931485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.736 [2024-12-13 03:53:04.931509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:04.948846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:04.948870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:04.961740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:04.961764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:04.974881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:04.974905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:04.992274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:04.992297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.005985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.006009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.018697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.018720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.035890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.035913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.051800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.051824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.068647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.068671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.084873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.084898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.100644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.100668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.117046] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.117070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.130971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.130995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.148683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.148707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.161118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.161141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.175969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.175993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:03.995 [2024-12-13 03:53:05.192588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:03.995 [2024-12-13 03:53:05.192612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.256 [2024-12-13 03:53:05.208009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.256 [2024-12-13 03:53:05.208033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.256 14264.00 IOPS, 111.44 MiB/s [2024-12-13T02:53:05.465Z] [2024-12-13 03:53:05.224413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.256 [2024-12-13 03:53:05.224437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.256 [2024-12-13 03:53:05.240636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.240660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.254260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.254282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.271487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.271511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.287975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.287999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.305174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.305198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.317322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.317346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.332057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:04.257 [2024-12-13 03:53:05.332087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.349011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.349034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.363733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.363756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.380856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.380880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.392882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.392906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.409161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.409184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.421407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.421431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.436819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.436844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.257 [2024-12-13 03:53:05.452181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.257 [2024-12-13 03:53:05.452206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.469274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.469299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.483485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.483509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.501118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.501141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.514569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.514593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.531665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.531688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.547900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.547930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.564486] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.564509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.580667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.580696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.595476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.595499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.613103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.613128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.626107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.626130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.640303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.640330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.657273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.657297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.671430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.671455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.688368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.688393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.703336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.703360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.518 [2024-12-13 03:53:05.720926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.518 [2024-12-13 03:53:05.720952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.736833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.736858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.751297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.751321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.768976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.769000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.782930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.782954] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.800008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.800032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.815952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.815976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.833302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.833327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.846540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.846566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.864040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.864064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.879007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.879036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.895717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.895743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.911858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.911883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.928775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.928800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.943616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.943640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.960868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.960892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:04.777 [2024-12-13 03:53:05.973868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:04.777 [2024-12-13 03:53:05.973893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:05.986419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:05.986444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.003541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.003565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.020583] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.020608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.035277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.035301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.052495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.052520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.068312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.068336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.085576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.085600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.098203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.098227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.110816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.110839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.127776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.127801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.144237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.144261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.161013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.161037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.174113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.174141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.188028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.188051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.204369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.204393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 14323.00 IOPS, 111.90 MiB/s [2024-12-13T02:53:06.245Z] [2024-12-13 03:53:06.220096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.036 [2024-12-13 03:53:06.220120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.036 [2024-12-13 03:53:06.237194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:05.036 [2024-12-13 03:53:06.237218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.250979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.251003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.268128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.268151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.283883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.283906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.301374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.301398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.314285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.314308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.327026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.327057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.344249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.344274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.360216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.360239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.376651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.376674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.392445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.392469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.405030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.405053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.420112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.420135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.436170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.436194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.452182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.452205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.468864] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.468888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.484712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.484735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.295 [2024-12-13 03:53:06.500426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.295 [2024-12-13 03:53:06.500450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.517039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.517063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.530883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.530906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.548261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.548285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.563357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.563380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.580537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.580561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.596058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.596081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.612681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.612705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.626894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.626924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.644190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.644215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.657394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.657419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.672487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.672511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.689289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.689313] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.701676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.701699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.716306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.716330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.732942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.732965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.554 [2024-12-13 03:53:06.748686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.554 [2024-12-13 03:53:06.748710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.764454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.764479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.781036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.781059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.794267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.794289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.807169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.807192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.824768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.824791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.838077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.838101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.852401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.852424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.869178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.869203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.882546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.882569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.899291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.899316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.916579] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.916604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.932095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.932119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.948556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.948580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.964677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.964701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.980262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.980285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:06.996355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:06.996379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:05.813 [2024-12-13 03:53:07.012039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:05.813 [2024-12-13 03:53:07.012063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.028742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.028766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.044525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.044548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.057964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.057989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.072132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.072156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.088678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.088704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.105255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.105280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.119262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.119286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.136568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.136592] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.151692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.151718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.169285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.169309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.182419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.182443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.199989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.200013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 14337.00 IOPS, 112.01 MiB/s [2024-12-13T02:53:07.282Z] [2024-12-13 03:53:07.215298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.215323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.233169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.233193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.246385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.246409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.263663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.263688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.073 [2024-12-13 03:53:07.280771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.073 [2024-12-13 03:53:07.280796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.295757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.295783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.312665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.312690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.327770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.327804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.345720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.345749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.358602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.358626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 
03:53:07.375578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.375601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.391782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.391806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.408872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.408896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.423602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.423626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.440795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.440819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.455306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.455330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.472316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.472340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.488829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.488853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.503153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.503176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.519963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.519986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.332 [2024-12-13 03:53:07.537025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.332 [2024-12-13 03:53:07.537048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.551113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.551138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.568169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.568193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.585209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.585232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.597518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.597542] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.610893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.610923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.627987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.628011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.644794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.644823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.659180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.659204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.676047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.676071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.693234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.693257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.705513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.705537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.720498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.720523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.737136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.737160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.751351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.751376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.768964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.768988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.781246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.781270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.592 [2024-12-13 03:53:07.796968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.592 [2024-12-13 03:53:07.796993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.812141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.812165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.828928] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.828952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.843724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.843748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.861341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.861365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.874446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.874469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.891611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.891635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.908458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.908482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.923719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.923743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.941591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.941619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.954932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.954955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.972347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.972370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:07.985480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:07.985504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:08.000162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:08.000185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:08.016495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.851 [2024-12-13 03:53:08.016519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.851 [2024-12-13 03:53:08.033136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.852 [2024-12-13 03:53:08.033159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:06.852 [2024-12-13 03:53:08.046739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:06.852 [2024-12-13 03:53:08.046764] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.064364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.064400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.077781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.077805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.092309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.092333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.109004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.109029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.120884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.120908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.136704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.136727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.153292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.153316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.166267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.166291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.179090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.179115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.196433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.196457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.209521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.209544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 14330.75 IOPS, 111.96 MiB/s [2024-12-13T02:53:08.320Z] [2024-12-13 03:53:08.224088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.224113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.240549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.240573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.255576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.255600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 
03:53:08.272718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.272741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.288639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.288663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.301495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.301519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.111 [2024-12-13 03:53:08.316342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.111 [2024-12-13 03:53:08.316372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.332151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.332175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.349246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.349271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.363424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.363448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.380696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.380719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.394189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.394211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.411520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.411544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.428216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.428240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.443819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.443843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.461441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.461465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.473730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.473753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.486700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.486723] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.503937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.503963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.519729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.519753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.536675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.536701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.550083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.550108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.370 [2024-12-13 03:53:08.562964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.370 [2024-12-13 03:53:08.562987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.580423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.580449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.595587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.595612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.612975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.613000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.625547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.625571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.638810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.638834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.656502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.656526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.671258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.671283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.688624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.688648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.701614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.701638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.716381] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.716405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.733001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.733026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.746112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.746136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.758806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.758830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.770246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.629 [2024-12-13 03:53:08.770269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.629 [2024-12-13 03:53:08.787768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.630 [2024-12-13 03:53:08.787792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.630 [2024-12-13 03:53:08.804434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.630 [2024-12-13 03:53:08.804459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.630 [2024-12-13 03:53:08.819868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.630 [2024-12-13 03:53:08.819893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.630 [2024-12-13 03:53:08.836577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.630 [2024-12-13 03:53:08.836602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.852856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.852882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.868948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.868973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.883667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.883691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.901234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.901259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.913496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.913521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.928342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.928366] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.945162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.945187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.957444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.957468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.972915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.972946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:08.988639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:08.988663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:09.004511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:09.004535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:09.021427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:09.021451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:09.034956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:09.034980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.888 [2024-12-13 03:53:09.052263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.888 [2024-12-13 03:53:09.052287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.889 [2024-12-13 03:53:09.065224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.889 [2024-12-13 03:53:09.065247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:07.889 [2024-12-13 03:53:09.080442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:07.889 [2024-12-13 03:53:09.080470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.096749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.096774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.111553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.111577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.128643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.128667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.141810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.141833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.156072] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.156097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.172882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.172905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.186048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.186074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.198854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.198880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.215583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.215606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:08.148 14306.80 IOPS, 111.77 MiB/s
00:42:08.148 Latency(us)
00:42:08.148 [2024-12-13T02:53:09.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:08.148 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:42:08.148 Nvme1n1 : 5.01 14316.37 111.85 0.00 0.00 8933.29 2481.01 15541.39
00:42:08.148 [2024-12-13T02:53:09.357Z] ===================================================================================================================
00:42:08.148 [2024-12-13T02:53:09.357Z] Total : 14316.37 111.85 0.00 0.00 8933.29 2481.01 15541.39
00:42:08.148 [2024-12-13 03:53:09.225985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.226007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.237966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.237987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.249984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.250005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.261975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.261995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.273980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.274010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.286050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.286086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.297986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.298011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.309960]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.309980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.321975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.321994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.333961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.333980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.148 [2024-12-13 03:53:09.345976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.148 [2024-12-13 03:53:09.345995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.357981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.358002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.369964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.369985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.381987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.382007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.393975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.393994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.405959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.405978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.417972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.417991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.429973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.429992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.441975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.441994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.453972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.453991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.465965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.465984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.477975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.477994] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.489975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.489994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.407 [2024-12-13 03:53:09.501957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.407 [2024-12-13 03:53:09.501975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.513969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.513987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.525958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.525981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.537975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.537994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.549988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.550007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.561962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.561981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.573990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.574009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.585965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.585983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.597990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.598010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.408 [2024-12-13 03:53:09.609977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.408 [2024-12-13 03:53:09.609996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.621960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.621980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.633982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.634004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.645994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.646018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.657961] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.657980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.669973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.669992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.681975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.681994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.693960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.693979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.705974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.705992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.717975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.717994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.729973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.729992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.741974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.741993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.753961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.753980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.765975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.765995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.777973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.777991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.789977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.789996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.801971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.801990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.813957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.813977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.825979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.825998] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.837973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.837991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.849963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.849982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.861996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.862015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.667 [2024-12-13 03:53:09.873978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.667 [2024-12-13 03:53:09.873997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.885957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.885977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.898001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.898021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.909974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.909994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.921978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.921997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.933974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.933993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.945963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.945983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.957979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.957999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.969979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.970000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.981966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.981986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:09.993974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:09.993993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.006073] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.006111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.018044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.018073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.029991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.030015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.041968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.041994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.053984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.054005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.065980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.066001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.077961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.077982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.089977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.089997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.101969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.101989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.113979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.113998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 [2024-12-13 03:53:10.125974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:08.927 [2024-12-13 03:53:10.125992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:08.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2978036) - No such process 00:42:08.927 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2978036 00:42:08.927 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:08.927 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.927 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d 
delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.187 delay0 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.187 03:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:09.187 [2024-12-13 03:53:10.293922] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:17.307 Initializing NVMe Controllers 00:42:17.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:17.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:17.308 Initialization complete. Launching workers. 00:42:17.308 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 18249 00:42:17.308 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18416, failed to submit 93 00:42:17.308 success 18313, unsuccessful 103, failed 0 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:17.308 rmmod nvme_tcp 00:42:17.308 rmmod nvme_fabrics 00:42:17.308 rmmod nvme_keyring 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2976017 ']' 00:42:17.308 03:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2976017 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2976017 ']' 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2976017 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2976017 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2976017' 00:42:17.308 killing process with pid 2976017 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2976017 00:42:17.308 03:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2976017 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:17.567 03:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:20.104 00:42:20.104 real 0m36.027s 00:42:20.104 user 0m47.830s 00:42:20.104 sys 0m13.094s 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:20.104 ************************************ 00:42:20.104 END TEST nvmf_zcopy 00:42:20.104 
************************************ 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:20.104 ************************************ 00:42:20.104 START TEST nvmf_nmic 00:42:20.104 ************************************ 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:20.104 * Looking for test storage... 00:42:20.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:20.104 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:20.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.104 --rc genhtml_branch_coverage=1 00:42:20.104 --rc genhtml_function_coverage=1 00:42:20.104 --rc genhtml_legend=1 00:42:20.104 --rc geninfo_all_blocks=1 00:42:20.105 --rc geninfo_unexecuted_blocks=1 00:42:20.105 00:42:20.105 ' 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:20.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.105 --rc genhtml_branch_coverage=1 00:42:20.105 --rc genhtml_function_coverage=1 00:42:20.105 --rc genhtml_legend=1 00:42:20.105 --rc geninfo_all_blocks=1 00:42:20.105 --rc geninfo_unexecuted_blocks=1 00:42:20.105 00:42:20.105 ' 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:20.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.105 --rc genhtml_branch_coverage=1 00:42:20.105 --rc genhtml_function_coverage=1 00:42:20.105 --rc genhtml_legend=1 00:42:20.105 --rc geninfo_all_blocks=1 00:42:20.105 --rc geninfo_unexecuted_blocks=1 00:42:20.105 00:42:20.105 ' 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:20.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:20.105 --rc genhtml_branch_coverage=1 00:42:20.105 --rc genhtml_function_coverage=1 00:42:20.105 --rc genhtml_legend=1 00:42:20.105 --rc geninfo_all_blocks=1 00:42:20.105 --rc geninfo_unexecuted_blocks=1 00:42:20.105 00:42:20.105 ' 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:20.105 03:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:20.105 03:53:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:20.105 03:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:25.425 03:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:25.425 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:25.425 03:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:25.425 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:25.425 Found net devices under 0000:af:00.0: cvl_0_0 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:25.425 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:25.426 
03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:25.426 Found net devices under 0000:af:00.1: cvl_0_1 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:25.426 03:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
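The block above is nvmf_tcp_init from test/nvmf/common.sh: one port of the e810 pair (cvl_0_0, the NVMF_TARGET_INTERFACE) is moved into a private network namespace and the two ports are addressed back-to-back, so the target will listen on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays on 10.0.0.1 in the default namespace. A minimal stand-alone sketch of that wiring, using only the interface names and addresses visible in this log (running it by hand as root, and the NIC names on any other machine, are assumptions):

#!/usr/bin/env bash
# Sketch of the namespace wiring performed by nvmf_tcp_init (names and addresses taken from this log).
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                      # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"                            # namespace that will own the target port
ip link set cvl_0_0 netns "$NS"               # target-facing port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port on the initiator side
ping -c 1 10.0.0.2                            # reachability check from the initiator side

The two pings recorded just below (10.0.0.2 from the default namespace, then 10.0.0.1 from inside the target namespace) are the script's own confirmation that the two stacks can reach each other before nvmf_tgt is started.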
00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:25.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:25.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:42:25.426 00:42:25.426 --- 10.0.0.2 ping statistics --- 00:42:25.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:25.426 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:25.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:25.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:42:25.426 00:42:25.426 --- 10.0.0.1 ping statistics --- 00:42:25.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:25.426 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2983712 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2983712 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2983712 ']' 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:25.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:25.426 03:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:25.426 [2024-12-13 03:53:26.298585] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:25.426 [2024-12-13 03:53:26.300661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:25.426 [2024-12-13 03:53:26.300730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:25.426 [2024-12-13 03:53:26.416258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:25.426 [2024-12-13 03:53:26.527342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:25.426 [2024-12-13 03:53:26.527384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:25.427 [2024-12-13 03:53:26.527395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:25.427 [2024-12-13 03:53:26.527420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:25.427 [2024-12-13 03:53:26.527430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:25.427 [2024-12-13 03:53:26.529793] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:25.427 [2024-12-13 03:53:26.529868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:25.427 [2024-12-13 03:53:26.529970] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:25.427 [2024-12-13 03:53:26.529982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:25.687 [2024-12-13 03:53:26.850817] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:25.687 [2024-12-13 03:53:26.851696] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:25.687 [2024-12-13 03:53:26.852911] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
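At this point the target is up: nvmf_tgt was launched inside the namespace with -i 0 -e 0xFFFF --interrupt-mode -m 0xF, its four reactors have started on cores 0-3, and each spdk_thread has been switched to interrupt mode. The trace that follows is nmic.sh configuring it over JSON-RPC (transport, malloc bdev, subsystems, namespace, listeners) through the rpc_cmd wrapper. Roughly the same sequence is sketched below against scripts/rpc.py for readability; the direct rpc.py invocation and the relative script path are assumptions, while the method names and arguments are the ones visible in the trace:

#!/usr/bin/env bash
# Sketch of the RPC sequence driven by nmic.sh (methods and arguments taken from the trace below).
RPC=./scripts/rpc.py                                   # talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport with the test's options
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB malloc bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# test case1: a bdev that already backs a namespace cannot be added to a second subsystem.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    && echo "unexpected success" || echo "expected failure: Invalid parameters (-32602)"

# test case2: the same subsystem exposed on a second port for multipath connects.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The expected failure in test case1 is the bdev_open error logged below ("bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target"), which the RPC layer surfaces as the -32602 "Invalid parameters" JSON-RPC response.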
00:42:25.687 [2024-12-13 03:53:26.853641] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:25.687 [2024-12-13 03:53:26.853893] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.947 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:25.947 [2024-12-13 03:53:27.147057] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 Malloc0 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:26.208 
03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 [2024-12-13 03:53:27.250972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:26.208 test case1: single bdev can't be used in multiple subsystems 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 [2024-12-13 03:53:27.274605] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:26.208 [2024-12-13 03:53:27.274638] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:26.208 [2024-12-13 03:53:27.274650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.208 request: 00:42:26.208 { 00:42:26.208 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:26.208 "namespace": { 00:42:26.208 "bdev_name": "Malloc0", 00:42:26.208 "no_auto_visible": false, 00:42:26.208 "hide_metadata": false 00:42:26.208 }, 00:42:26.208 "method": "nvmf_subsystem_add_ns", 00:42:26.208 "req_id": 1 00:42:26.208 } 00:42:26.208 Got JSON-RPC error response 00:42:26.208 response: 00:42:26.208 { 00:42:26.208 "code": -32602, 00:42:26.208 "message": "Invalid parameters" 00:42:26.208 } 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:26.208 03:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:26.208 Adding namespace failed - expected result. 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:26.208 test case2: host connect to nvmf target in multiple paths 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:26.208 [2024-12-13 03:53:27.286719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.208 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:26.468 03:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:27.038 03:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:27.038 03:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:42:27.038 03:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:27.038 03:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:27.038 03:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:42:28.947 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:28.948 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:28.948 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:28.948 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:28.948 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:28.948 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:42:28.948 03:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:28.948 [global] 00:42:28.948 thread=1 00:42:28.948 invalidate=1 
00:42:28.948 rw=write 00:42:28.948 time_based=1 00:42:28.948 runtime=1 00:42:28.948 ioengine=libaio 00:42:28.948 direct=1 00:42:28.948 bs=4096 00:42:28.948 iodepth=1 00:42:28.948 norandommap=0 00:42:28.948 numjobs=1 00:42:28.948 00:42:28.948 verify_dump=1 00:42:28.948 verify_backlog=512 00:42:28.948 verify_state_save=0 00:42:28.948 do_verify=1 00:42:28.948 verify=crc32c-intel 00:42:28.948 [job0] 00:42:28.948 filename=/dev/nvme0n1 00:42:28.948 Could not set queue depth (nvme0n1) 00:42:29.207 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:29.207 fio-3.35 00:42:29.207 Starting 1 thread 00:42:30.589 00:42:30.589 job0: (groupid=0, jobs=1): err= 0: pid=2984406: Fri Dec 13 03:53:31 2024 00:42:30.589 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:42:30.589 slat (nsec): min=10245, max=26199, avg=21784.13, stdev=2673.49 00:42:30.589 clat (usec): min=40907, max=42302, avg=41032.38, stdev=280.22 00:42:30.589 lat (usec): min=40929, max=42312, avg=41054.16, stdev=277.81 00:42:30.589 clat percentiles (usec): 00:42:30.589 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:30.589 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:30.589 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:30.589 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:30.589 | 99.99th=[42206] 00:42:30.589 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:42:30.589 slat (nsec): min=9866, max=46424, avg=11337.47, stdev=2382.08 00:42:30.589 clat (usec): min=152, max=306, avg=165.62, stdev= 8.68 00:42:30.589 lat (usec): min=163, max=352, avg=176.95, stdev=10.09 00:42:30.589 clat percentiles (usec): 00:42:30.589 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:42:30.589 | 30.00th=[ 163], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:42:30.589 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:42:30.589 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 306], 99.95th=[ 306], 00:42:30.589 | 99.99th=[ 306] 00:42:30.589 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:30.589 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:30.589 lat (usec) : 250=95.51%, 500=0.19% 00:42:30.589 lat (msec) : 50=4.30% 00:42:30.589 cpu : usr=0.68%, sys=0.58%, ctx=535, majf=0, minf=1 00:42:30.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:30.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.590 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:30.590 00:42:30.590 Run status group 0 (all jobs): 00:42:30.590 READ: bw=88.7KiB/s (90.8kB/s), 88.7KiB/s-88.7KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), run=1037-1037msec 00:42:30.590 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec 00:42:30.590 00:42:30.590 Disk stats (read/write): 00:42:30.590 nvme0n1: ios=69/512, merge=0/0, ticks=801/77, in_queue=878, util=91.38% 00:42:30.590 03:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:31.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:31.159 03:53:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:31.159 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:31.160 rmmod nvme_tcp 00:42:31.160 rmmod nvme_fabrics 00:42:31.160 rmmod nvme_keyring 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2983712 ']' 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2983712 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2983712 ']' 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2983712 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2983712 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2983712' 00:42:31.160 killing process with pid 2983712 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2983712 00:42:31.160 03:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2983712 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:32.611 03:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:34.593 00:42:34.593 real 0m14.761s 00:42:34.593 user 0m27.870s 00:42:34.593 sys 0m5.728s 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:34.593 ************************************ 00:42:34.593 END TEST nvmf_nmic 00:42:34.593 ************************************ 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:34.593 ************************************ 00:42:34.593 START TEST nvmf_fio_target 00:42:34.593 ************************************ 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:34.593 * Looking for test storage... 
00:42:34.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:34.593 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:34.594 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.855 --rc genhtml_branch_coverage=1 00:42:34.855 --rc genhtml_function_coverage=1 00:42:34.855 --rc genhtml_legend=1 00:42:34.855 --rc geninfo_all_blocks=1 00:42:34.855 --rc geninfo_unexecuted_blocks=1 00:42:34.855 00:42:34.855 ' 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.855 --rc genhtml_branch_coverage=1 00:42:34.855 --rc genhtml_function_coverage=1 00:42:34.855 --rc genhtml_legend=1 00:42:34.855 --rc geninfo_all_blocks=1 00:42:34.855 --rc geninfo_unexecuted_blocks=1 00:42:34.855 00:42:34.855 ' 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.855 --rc genhtml_branch_coverage=1 00:42:34.855 --rc genhtml_function_coverage=1 00:42:34.855 --rc genhtml_legend=1 00:42:34.855 --rc geninfo_all_blocks=1 00:42:34.855 --rc geninfo_unexecuted_blocks=1 00:42:34.855 00:42:34.855 ' 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:34.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:34.855 --rc genhtml_branch_coverage=1 00:42:34.855 --rc genhtml_function_coverage=1 00:42:34.855 --rc genhtml_legend=1 00:42:34.855 --rc geninfo_all_blocks=1 00:42:34.855 --rc geninfo_unexecuted_blocks=1 00:42:34.855 
00:42:34.855 ' 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.855 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:34.856 03:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:40.133 03:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:40.133 03:53:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:40.133 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.133 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:40.134 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:40.134 Found net 
devices under 0000:af:00.0: cvl_0_0 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:40.134 Found net devices under 0000:af:00.1: cvl_0_1 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:40.134 03:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:40.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:40.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:42:40.134 00:42:40.134 --- 10.0.0.2 ping statistics --- 00:42:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.134 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:40.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:40.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:42:40.134 00:42:40.134 --- 10.0.0.1 ping statistics --- 00:42:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.134 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2988254 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2988254 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2988254 ']' 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:40.134 03:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.134 [2024-12-13 03:53:41.307820] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:40.134 [2024-12-13 03:53:41.309782] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:40.134 [2024-12-13 03:53:41.309847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:40.394 [2024-12-13 03:53:41.428208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:40.394 [2024-12-13 03:53:41.534521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:40.394 [2024-12-13 03:53:41.534563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:40.394 [2024-12-13 03:53:41.534575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:40.394 [2024-12-13 03:53:41.534583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:40.394 [2024-12-13 03:53:41.534592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:40.394 [2024-12-13 03:53:41.536966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:40.394 [2024-12-13 03:53:41.537014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:40.394 [2024-12-13 03:53:41.537077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:40.394 [2024-12-13 03:53:41.537099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:40.653 [2024-12-13 03:53:41.852652] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:40.653 [2024-12-13 03:53:41.853465] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:40.653 [2024-12-13 03:53:41.854478] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:40.653 [2024-12-13 03:53:41.855137] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:40.653 [2024-12-13 03:53:41.855356] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:42:40.912 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:40.912 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:42:40.912 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:40.912 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:40.912 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:41.172 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:41.172 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:41.172 [2024-12-13 03:53:42.322246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:41.172 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:41.430 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:42:41.431 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:41.689 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:42:41.689 03:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:41.948 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:41.948 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:42.207 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:42:42.207 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:42:42.466 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:42.725 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:42:42.725 03:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:42.984 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:42:42.984 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:43.243 03:53:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:42:43.243 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:42:43.502 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:43.502 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:43.502 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:43.761 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:43.761 03:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:42:44.023 03:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:44.284 [2024-12-13 03:53:45.242039] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.284 03:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:42:44.284 03:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:42:44.542 03:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:45.109 03:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:42:45.109 03:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:42:45.109 03:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:45.109 03:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:42:45.109 03:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:42:45.109 03:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:42:47.014 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:47.015 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:42:47.015 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:47.015 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:42:47.015 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:47.015 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:42:47.015 03:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:47.015 [global] 00:42:47.015 thread=1 00:42:47.015 invalidate=1 00:42:47.015 rw=write 00:42:47.015 time_based=1 00:42:47.015 runtime=1 00:42:47.015 ioengine=libaio 00:42:47.015 direct=1 00:42:47.015 bs=4096 00:42:47.015 iodepth=1 00:42:47.015 norandommap=0 00:42:47.015 numjobs=1 00:42:47.015 00:42:47.015 verify_dump=1 00:42:47.015 verify_backlog=512 00:42:47.015 verify_state_save=0 00:42:47.015 do_verify=1 00:42:47.015 verify=crc32c-intel 00:42:47.015 [job0] 00:42:47.015 filename=/dev/nvme0n1 00:42:47.015 [job1] 00:42:47.015 filename=/dev/nvme0n2 00:42:47.015 [job2] 00:42:47.015 filename=/dev/nvme0n3 00:42:47.015 [job3] 00:42:47.015 filename=/dev/nvme0n4 00:42:47.015 Could not set queue depth (nvme0n1) 00:42:47.015 Could not set queue depth (nvme0n2) 00:42:47.015 Could not set queue depth (nvme0n3) 00:42:47.015 Could not set queue depth (nvme0n4) 00:42:47.273 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:47.273 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:47.273 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:47.273 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:47.273 fio-3.35 00:42:47.273 Starting 4 threads 00:42:48.651 00:42:48.651 job0: (groupid=0, jobs=1): err= 0: pid=2989564: Fri Dec 13 03:53:49 2024 00:42:48.651 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:42:48.651 slat (nsec): min=6824, max=49601, avg=8168.97, stdev=2005.90 00:42:48.651 clat (usec): min=201, max=504, avg=257.46, stdev=44.56 00:42:48.651 lat (usec): min=209, max=512, avg=265.63, stdev=44.61 00:42:48.651 clat percentiles (usec): 00:42:48.651 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 237], 00:42:48.651 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:42:48.651 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 334], 00:42:48.651 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 498], 99.95th=[ 502], 00:42:48.651 | 99.99th=[ 506] 00:42:48.651 write: IOPS=2346, BW=9387KiB/s (9612kB/s)(9396KiB/1001msec); 0 zone resets 00:42:48.651 slat (nsec): min=9782, max=44097, avg=11962.39, stdev=3503.39 00:42:48.651 clat (usec): min=137, max=341, avg=176.51, stdev=18.93 00:42:48.651 lat (usec): min=148, max=379, avg=188.48, stdev=20.03 00:42:48.651 clat percentiles (usec): 00:42:48.651 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:42:48.651 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:42:48.651 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:42:48.651 | 99.00th=[ 225], 99.50th=[ 
233], 99.90th=[ 265], 99.95th=[ 269], 00:42:48.651 | 99.99th=[ 343] 00:42:48.651 bw ( KiB/s): min= 9512, max= 9512, per=45.36%, avg=9512.00, stdev= 0.00, samples=1 00:42:48.651 iops : min= 2378, max= 2378, avg=2378.00, stdev= 0.00, samples=1 00:42:48.651 lat (usec) : 250=83.83%, 500=16.12%, 750=0.05% 00:42:48.651 cpu : usr=3.40%, sys=7.30%, ctx=4397, majf=0, minf=1 00:42:48.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 issued rwts: total=2048,2349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:48.652 job1: (groupid=0, jobs=1): err= 0: pid=2989565: Fri Dec 13 03:53:49 2024 00:42:48.652 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:42:48.652 slat (nsec): min=9640, max=23581, avg=19428.68, stdev=4934.87 00:42:48.652 clat (usec): min=40881, max=41128, avg=40983.92, stdev=61.19 00:42:48.652 lat (usec): min=40903, max=41139, avg=41003.34, stdev=59.55 00:42:48.652 clat percentiles (usec): 00:42:48.652 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:42:48.652 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:48.652 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:48.652 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:48.652 | 99.99th=[41157] 00:42:48.652 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:42:48.652 slat (nsec): min=10049, max=37060, avg=11553.23, stdev=1831.75 00:42:48.652 clat (usec): min=183, max=499, avg=235.54, stdev=24.26 00:42:48.652 lat (usec): min=196, max=536, avg=247.09, stdev=24.87 00:42:48.652 clat percentiles (usec): 00:42:48.652 | 1.00th=[ 190], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:42:48.652 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:42:48.652 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 269], 00:42:48.652 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 498], 99.95th=[ 498], 00:42:48.652 | 99.99th=[ 498] 00:42:48.652 bw ( KiB/s): min= 4096, max= 4096, per=19.53%, avg=4096.00, stdev= 0.00, samples=1 00:42:48.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:48.652 lat (usec) : 250=76.40%, 500=19.48% 00:42:48.652 lat (msec) : 50=4.12% 00:42:48.652 cpu : usr=0.78%, sys=0.49%, ctx=534, majf=0, minf=1 00:42:48.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:48.652 job2: (groupid=0, jobs=1): err= 0: pid=2989566: Fri Dec 13 03:53:49 2024 00:42:48.652 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:42:48.652 slat (nsec): min=9810, max=27417, avg=20277.41, stdev=6491.50 00:42:48.652 clat (usec): min=40718, max=42066, avg=41129.10, stdev=355.15 00:42:48.652 lat (usec): min=40744, max=42094, avg=41149.37, stdev=355.24 00:42:48.652 clat percentiles (usec): 00:42:48.652 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:48.652 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:48.652 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:48.652 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:48.652 | 99.99th=[42206] 00:42:48.652 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:42:48.652 slat (nsec): min=11255, max=39316, avg=12882.93, stdev=1911.70 00:42:48.652 clat (usec): min=171, max=351, avg=233.79, stdev=21.34 00:42:48.652 lat (usec): min=183, max=391, avg=246.68, stdev=21.81 00:42:48.652 clat percentiles (usec): 00:42:48.652 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:42:48.652 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:42:48.652 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 269], 00:42:48.652 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 351], 99.95th=[ 351], 00:42:48.652 | 99.99th=[ 351] 00:42:48.652 bw ( KiB/s): min= 4096, max= 4096, per=19.53%, avg=4096.00, stdev= 0.00, samples=1 00:42:48.652 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:48.652 lat (usec) : 250=78.65%, 500=17.23% 00:42:48.652 lat (msec) : 50=4.12% 00:42:48.652 cpu : usr=0.68%, sys=0.68%, ctx=535, majf=0, minf=1 00:42:48.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:48.652 job3: (groupid=0, jobs=1): err= 0: pid=2989567: Fri Dec 13 03:53:49 2024 00:42:48.652 read: IOPS=2026, BW=8108KiB/s (8302kB/s)(8116KiB/1001msec) 00:42:48.652 slat (nsec): min=6727, max=42430, avg=7976.09, stdev=1785.48 00:42:48.652 clat (usec): min=204, max=525, avg=278.92, stdev=55.75 00:42:48.652 lat (usec): min=211, max=533, avg=286.89, stdev=55.94 00:42:48.652 clat percentiles (usec): 00:42:48.652 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 243], 00:42:48.652 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:42:48.652 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 461], 00:42:48.652 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 523], 00:42:48.652 | 99.99th=[ 529] 00:42:48.652 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:42:48.652 slat (nsec): min=9306, max=41792, avg=10790.57, stdev=1703.19 00:42:48.652 clat (usec): min=146, max=404, avg=187.98, stdev=20.56 00:42:48.652 lat (usec): min=158, max=446, avg=198.77, stdev=20.83 00:42:48.652 clat percentiles (usec): 00:42:48.652 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 176], 00:42:48.652 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:42:48.652 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 231], 00:42:48.652 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 265], 99.95th=[ 269], 00:42:48.652 | 99.99th=[ 404] 00:42:48.652 bw ( KiB/s): min= 8624, max= 8624, per=41.12%, avg=8624.00, stdev= 0.00, samples=1 00:42:48.652 iops : min= 2156, max= 2156, avg=2156.00, stdev= 0.00, samples=1 00:42:48.652 lat (usec) : 250=62.50%, 500=37.16%, 750=0.34% 00:42:48.652 cpu : usr=2.30%, sys=5.20%, ctx=4077, majf=0, minf=1 00:42:48.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.652 issued rwts: total=2029,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:48.652 00:42:48.652 Run status group 0 (all jobs): 00:42:48.652 READ: bw=15.6MiB/s (16.3MB/s), 85.1KiB/s-8184KiB/s (87.1kB/s-8380kB/s), io=16.1MiB (16.9MB), run=1001-1034msec 00:42:48.652 WRITE: bw=20.5MiB/s (21.5MB/s), 1981KiB/s-9387KiB/s (2028kB/s-9612kB/s), io=21.2MiB (22.2MB), run=1001-1034msec 00:42:48.652 00:42:48.652 Disk stats (read/write): 00:42:48.652 nvme0n1: ios=1677/2048, merge=0/0, ticks=429/339, in_queue=768, util=84.77% 00:42:48.652 nvme0n2: ios=17/512, merge=0/0, ticks=697/111, in_queue=808, util=85.14% 00:42:48.652 nvme0n3: ios=43/512, merge=0/0, ticks=1645/118, in_queue=1763, util=100.00% 00:42:48.652 nvme0n4: ios=1536/1871, merge=0/0, ticks=423/341, in_queue=764, util=89.45% 00:42:48.652 03:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:48.652 [global] 00:42:48.652 thread=1 00:42:48.652 invalidate=1 00:42:48.652 rw=randwrite 00:42:48.652 time_based=1 00:42:48.652 runtime=1 00:42:48.652 ioengine=libaio 00:42:48.652 direct=1 00:42:48.652 bs=4096 00:42:48.652 iodepth=1 00:42:48.652 norandommap=0 00:42:48.652 numjobs=1 00:42:48.652 00:42:48.652 verify_dump=1 00:42:48.652 verify_backlog=512 00:42:48.652 verify_state_save=0 00:42:48.652 do_verify=1 00:42:48.652 verify=crc32c-intel 00:42:48.652 [job0] 00:42:48.652 filename=/dev/nvme0n1 00:42:48.652 [job1] 00:42:48.652 filename=/dev/nvme0n2 00:42:48.652 [job2] 00:42:48.652 filename=/dev/nvme0n3 00:42:48.652 [job3] 00:42:48.652 filename=/dev/nvme0n4 00:42:48.652 Could not set queue depth (nvme0n1) 00:42:48.652 Could not set queue depth (nvme0n2) 00:42:48.652 Could not set queue depth (nvme0n3) 00:42:48.652 Could not set queue depth (nvme0n4) 00:42:48.911 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:48.911 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:48.912 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:48.912 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:48.912 fio-3.35 00:42:48.912 Starting 4 threads 00:42:50.322 00:42:50.322 job0: (groupid=0, jobs=1): err= 0: pid=2989929: Fri Dec 13 03:53:51 2024 00:42:50.322 read: IOPS=1526, BW=6106KiB/s (6252kB/s)(6112KiB/1001msec) 00:42:50.322 slat (nsec): min=6488, max=27252, avg=7476.27, stdev=1405.99 00:42:50.322 clat (usec): min=216, max=41978, avg=417.33, stdev=2557.19 00:42:50.322 lat (usec): min=224, max=42002, avg=424.81, stdev=2557.82 00:42:50.322 clat percentiles (usec): 00:42:50.322 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 241], 00:42:50.322 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:42:50.322 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 297], 00:42:50.322 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[41157], 99.95th=[42206], 00:42:50.322 | 99.99th=[42206] 00:42:50.322 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:50.322 slat (usec): min=9, max=35755, avg=34.43, stdev=912.03 00:42:50.322 clat (usec): min=138, max=559, avg=188.90, stdev=33.55 00:42:50.322 lat (usec): min=148, max=36029, 
avg=223.33, stdev=914.86 00:42:50.322 clat percentiles (usec): 00:42:50.322 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 167], 00:42:50.322 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:42:50.322 | 70.00th=[ 200], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 243], 00:42:50.322 | 99.00th=[ 260], 99.50th=[ 322], 99.90th=[ 510], 99.95th=[ 562], 00:42:50.322 | 99.99th=[ 562] 00:42:50.322 bw ( KiB/s): min= 8192, max= 8192, per=40.48%, avg=8192.00, stdev= 0.00, samples=1 00:42:50.322 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:50.322 lat (usec) : 250=73.63%, 500=26.01%, 750=0.13% 00:42:50.322 lat (msec) : 2=0.03%, 50=0.20% 00:42:50.322 cpu : usr=1.60%, sys=3.00%, ctx=3066, majf=0, minf=1 00:42:50.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.322 issued rwts: total=1528,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:50.322 job1: (groupid=0, jobs=1): err= 0: pid=2989932: Fri Dec 13 03:53:51 2024 00:42:50.322 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:42:50.322 slat (nsec): min=11475, max=26819, avg=24638.59, stdev=3019.94 00:42:50.322 clat (usec): min=40848, max=41143, avg=40968.88, stdev=67.98 00:42:50.322 lat (usec): min=40873, max=41154, avg=40993.52, stdev=66.32 00:42:50.322 clat percentiles (usec): 00:42:50.322 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:50.322 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:50.322 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:50.322 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:50.322 | 99.99th=[41157] 00:42:50.322 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:42:50.322 slat (nsec): min=10451, max=48025, avg=12705.87, stdev=3021.60 00:42:50.322 clat (usec): min=172, max=399, avg=197.56, stdev=16.38 00:42:50.322 lat (usec): min=187, max=441, avg=210.27, stdev=17.79 00:42:50.322 clat percentiles (usec): 00:42:50.322 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 188], 00:42:50.322 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 198], 00:42:50.322 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 221], 00:42:50.322 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 400], 99.95th=[ 400], 00:42:50.322 | 99.99th=[ 400] 00:42:50.322 bw ( KiB/s): min= 4096, max= 4096, per=20.24%, avg=4096.00, stdev= 0.00, samples=1 00:42:50.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:50.322 lat (usec) : 250=94.38%, 500=1.50% 00:42:50.322 lat (msec) : 50=4.12% 00:42:50.322 cpu : usr=0.69%, sys=0.79%, ctx=536, majf=0, minf=1 00:42:50.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.323 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:50.323 job2: (groupid=0, jobs=1): err= 0: pid=2989933: Fri Dec 13 03:53:51 2024 00:42:50.323 read: IOPS=1318, BW=5275KiB/s (5401kB/s)(5280KiB/1001msec) 00:42:50.323 slat (nsec): min=3389, max=29776, 
avg=7856.42, stdev=1693.99 00:42:50.323 clat (usec): min=266, max=41245, avg=482.99, stdev=2237.37 00:42:50.323 lat (usec): min=273, max=41256, avg=490.85, stdev=2237.75 00:42:50.323 clat percentiles (usec): 00:42:50.323 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:42:50.323 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:42:50.323 | 70.00th=[ 383], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 478], 00:42:50.323 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41157], 00:42:50.323 | 99.99th=[41157] 00:42:50.323 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:50.323 slat (nsec): min=9481, max=44190, avg=11883.21, stdev=3137.41 00:42:50.323 clat (usec): min=142, max=668, avg=212.65, stdev=35.71 00:42:50.323 lat (usec): min=155, max=678, avg=224.53, stdev=35.78 00:42:50.323 clat percentiles (usec): 00:42:50.323 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 178], 20.00th=[ 190], 00:42:50.323 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 215], 00:42:50.323 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 253], 00:42:50.323 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 502], 99.95th=[ 668], 00:42:50.323 | 99.99th=[ 668] 00:42:50.323 bw ( KiB/s): min= 8192, max= 8192, per=40.48%, avg=8192.00, stdev= 0.00, samples=1 00:42:50.323 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:50.323 lat (usec) : 250=50.60%, 500=47.72%, 750=1.51%, 1000=0.04% 00:42:50.323 lat (msec) : 50=0.14% 00:42:50.323 cpu : usr=1.50%, sys=2.90%, ctx=2860, majf=0, minf=1 00:42:50.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.323 issued rwts: total=1320,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:50.323 job3: (groupid=0, jobs=1): err= 0: pid=2989934: Fri Dec 13 03:53:51 2024 00:42:50.323 read: IOPS=1124, BW=4499KiB/s (4606kB/s)(4512KiB/1003msec) 00:42:50.323 slat (nsec): min=6629, max=26493, avg=7657.92, stdev=1459.21 00:42:50.323 clat (usec): min=232, max=41046, avg=581.70, stdev=3411.77 00:42:50.323 lat (usec): min=239, max=41069, avg=589.36, stdev=3412.81 00:42:50.323 clat percentiles (usec): 00:42:50.323 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:42:50.323 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:42:50.323 | 70.00th=[ 293], 80.00th=[ 343], 90.00th=[ 375], 95.00th=[ 420], 00:42:50.323 | 99.00th=[ 553], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:42:50.323 | 99.99th=[41157] 00:42:50.323 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:42:50.323 slat (nsec): min=9509, max=42821, avg=10688.69, stdev=1758.70 00:42:50.323 clat (usec): min=147, max=1926, avg=205.11, stdev=57.97 00:42:50.323 lat (usec): min=157, max=1935, avg=215.80, stdev=58.21 00:42:50.323 clat percentiles (usec): 00:42:50.323 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 167], 00:42:50.323 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 200], 60.00th=[ 215], 00:42:50.323 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 273], 00:42:50.323 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 383], 99.95th=[ 1926], 00:42:50.323 | 99.99th=[ 1926] 00:42:50.323 bw ( KiB/s): min= 4096, max= 8192, per=30.36%, avg=6144.00, stdev=2896.31, samples=2 00:42:50.323 iops : 
min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:42:50.323 lat (usec) : 250=60.40%, 500=38.66%, 750=0.60% 00:42:50.323 lat (msec) : 2=0.04%, 50=0.30% 00:42:50.323 cpu : usr=1.40%, sys=2.50%, ctx=2665, majf=0, minf=1 00:42:50.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:50.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:50.323 issued rwts: total=1128,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:50.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:50.323 00:42:50.323 Run status group 0 (all jobs): 00:42:50.323 READ: bw=15.4MiB/s (16.2MB/s), 87.0KiB/s-6106KiB/s (89.0kB/s-6252kB/s), io=15.6MiB (16.4MB), run=1001-1012msec 00:42:50.323 WRITE: bw=19.8MiB/s (20.7MB/s), 2024KiB/s-6138KiB/s (2072kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1012msec 00:42:50.323 00:42:50.323 Disk stats (read/write): 00:42:50.323 nvme0n1: ios=1065/1423, merge=0/0, ticks=1543/268, in_queue=1811, util=97.80% 00:42:50.323 nvme0n2: ios=49/512, merge=0/0, ticks=1696/97, in_queue=1793, util=97.76% 00:42:50.323 nvme0n3: ios=1066/1317, merge=0/0, ticks=1627/265, in_queue=1892, util=97.60% 00:42:50.323 nvme0n4: ios=1165/1536, merge=0/0, ticks=1695/307, in_queue=2002, util=96.95% 00:42:50.323 03:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:50.323 [global] 00:42:50.323 thread=1 00:42:50.323 invalidate=1 00:42:50.323 rw=write 00:42:50.323 time_based=1 00:42:50.323 runtime=1 00:42:50.323 ioengine=libaio 00:42:50.323 direct=1 00:42:50.323 bs=4096 00:42:50.323 iodepth=128 00:42:50.323 norandommap=0 00:42:50.323 numjobs=1 00:42:50.323 00:42:50.323 verify_dump=1 00:42:50.323 verify_backlog=512 00:42:50.323 verify_state_save=0 00:42:50.323 do_verify=1 00:42:50.323 verify=crc32c-intel 00:42:50.323 [job0] 00:42:50.323 filename=/dev/nvme0n1 00:42:50.323 [job1] 00:42:50.323 filename=/dev/nvme0n2 00:42:50.323 [job2] 00:42:50.323 filename=/dev/nvme0n3 00:42:50.323 [job3] 00:42:50.323 filename=/dev/nvme0n4 00:42:50.323 Could not set queue depth (nvme0n1) 00:42:50.323 Could not set queue depth (nvme0n2) 00:42:50.323 Could not set queue depth (nvme0n3) 00:42:50.323 Could not set queue depth (nvme0n4) 00:42:50.588 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:50.588 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:50.588 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:50.588 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:50.588 fio-3.35 00:42:50.588 Starting 4 threads 00:42:51.967 00:42:51.967 job0: (groupid=0, jobs=1): err= 0: pid=2990296: Fri Dec 13 03:53:52 2024 00:42:51.967 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:42:51.967 slat (nsec): min=1135, max=22925k, avg=117662.19, stdev=1089585.72 00:42:51.967 clat (usec): min=1294, max=68512, avg=16163.90, stdev=11092.42 00:42:51.967 lat (usec): min=1304, max=68520, avg=16281.56, stdev=11191.60 00:42:51.967 clat percentiles (usec): 00:42:51.967 | 1.00th=[ 4146], 5.00th=[ 6718], 10.00th=[ 8225], 20.00th=[ 8717], 00:42:51.967 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10945], 
60.00th=[12780], 00:42:51.967 | 70.00th=[16909], 80.00th=[24511], 90.00th=[32637], 95.00th=[40633], 00:42:51.967 | 99.00th=[55837], 99.50th=[63177], 99.90th=[68682], 99.95th=[68682], 00:42:51.967 | 99.99th=[68682] 00:42:51.967 write: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1003msec); 0 zone resets 00:42:51.967 slat (usec): min=2, max=24901, avg=138.29, stdev=1081.45 00:42:51.967 clat (usec): min=1809, max=108897, avg=18413.44, stdev=19373.25 00:42:51.967 lat (msec): min=2, max=108, avg=18.55, stdev=19.52 00:42:51.967 clat percentiles (msec): 00:42:51.967 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:42:51.967 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:42:51.967 | 70.00th=[ 16], 80.00th=[ 23], 90.00th=[ 42], 95.00th=[ 64], 00:42:51.967 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 109], 99.95th=[ 109], 00:42:51.967 | 99.99th=[ 109] 00:42:51.967 bw ( KiB/s): min= 9120, max=19968, per=23.94%, avg=14544.00, stdev=7670.69, samples=2 00:42:51.967 iops : min= 2280, max= 4992, avg=3636.00, stdev=1917.67, samples=2 00:42:51.967 lat (msec) : 2=0.29%, 4=0.57%, 10=40.97%, 20=33.58%, 50=20.58% 00:42:51.967 lat (msec) : 100=3.62%, 250=0.39% 00:42:51.967 cpu : usr=2.40%, sys=4.89%, ctx=252, majf=0, minf=1 00:42:51.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:42:51.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:51.967 issued rwts: total=3584,3763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:51.967 job1: (groupid=0, jobs=1): err= 0: pid=2990297: Fri Dec 13 03:53:52 2024 00:42:51.967 read: IOPS=4100, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1010msec) 00:42:51.967 slat (nsec): min=1685, max=10825k, avg=92894.80, stdev=665067.77 00:42:51.967 clat (usec): min=3450, max=58612, avg=10757.82, stdev=5429.63 00:42:51.967 lat (usec): min=3458, max=58622, avg=10850.72, stdev=5509.29 00:42:51.967 clat percentiles (usec): 00:42:51.967 | 1.00th=[ 6718], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 8029], 00:42:51.967 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:42:51.967 | 70.00th=[10683], 80.00th=[12125], 90.00th=[14091], 95.00th=[17171], 00:42:51.967 | 99.00th=[41157], 99.50th=[47973], 99.90th=[54264], 99.95th=[54264], 00:42:51.967 | 99.99th=[58459] 00:42:51.967 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:42:51.967 slat (usec): min=2, max=18020, avg=127.07, stdev=871.10 00:42:51.967 clat (usec): min=1449, max=166644, avg=18139.51, stdev=25267.63 00:42:51.967 lat (usec): min=1463, max=166654, avg=18266.58, stdev=25430.84 00:42:51.967 clat percentiles (msec): 00:42:51.967 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:42:51.967 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:42:51.967 | 70.00th=[ 11], 80.00th=[ 15], 90.00th=[ 56], 95.00th=[ 61], 00:42:51.967 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:42:51.967 | 99.99th=[ 167] 00:42:51.967 bw ( KiB/s): min=12288, max=23920, per=29.80%, avg=18104.00, stdev=8225.07, samples=2 00:42:51.967 iops : min= 3072, max= 5980, avg=4526.00, stdev=2056.27, samples=2 00:42:51.967 lat (msec) : 2=0.02%, 4=0.83%, 10=66.07%, 20=23.57%, 50=3.41% 00:42:51.967 lat (msec) : 100=4.66%, 250=1.44% 00:42:51.967 cpu : usr=3.87%, sys=6.24%, ctx=357, majf=0, minf=2 00:42:51.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.3% 00:42:51.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:51.967 issued rwts: total=4142,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:51.967 job2: (groupid=0, jobs=1): err= 0: pid=2990298: Fri Dec 13 03:53:52 2024 00:42:51.967 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:42:51.967 slat (nsec): min=1649, max=18414k, avg=120113.49, stdev=865310.90 00:42:51.967 clat (usec): min=747, max=54446, avg=14390.07, stdev=6616.77 00:42:51.967 lat (usec): min=762, max=54541, avg=14510.19, stdev=6701.82 00:42:51.967 clat percentiles (usec): 00:42:51.967 | 1.00th=[ 1958], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[10290], 00:42:51.967 | 30.00th=[10814], 40.00th=[12387], 50.00th=[13566], 60.00th=[14353], 00:42:51.967 | 70.00th=[15008], 80.00th=[17695], 90.00th=[20317], 95.00th=[27132], 00:42:51.967 | 99.00th=[40633], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:42:51.967 | 99.99th=[54264] 00:42:51.967 write: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1012msec); 0 zone resets 00:42:51.967 slat (usec): min=2, max=14632, avg=107.64, stdev=753.31 00:42:51.967 clat (usec): min=470, max=87320, avg=16355.31, stdev=12713.35 00:42:51.967 lat (usec): min=496, max=87331, avg=16462.96, stdev=12771.11 00:42:51.967 clat percentiles (usec): 00:42:51.967 | 1.00th=[ 1004], 5.00th=[ 5735], 10.00th=[ 8225], 20.00th=[ 9372], 00:42:51.967 | 30.00th=[10421], 40.00th=[11469], 50.00th=[13042], 60.00th=[14222], 00:42:51.967 | 70.00th=[16909], 80.00th=[20055], 90.00th=[28443], 95.00th=[32637], 00:42:51.967 | 99.00th=[82314], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:42:51.967 | 99.99th=[87557] 00:42:51.967 bw ( KiB/s): min=13912, max=18976, per=27.07%, avg=16444.00, stdev=3580.79, samples=2 00:42:51.967 iops : min= 3478, max= 4744, avg=4111.00, stdev=895.20, samples=2 00:42:51.967 lat (usec) : 500=0.02%, 750=0.10%, 1000=0.41% 00:42:51.967 lat (msec) : 2=1.42%, 4=0.95%, 10=17.53%, 20=64.08%, 50=13.82% 00:42:51.967 lat (msec) : 100=1.68% 00:42:51.967 cpu : usr=3.17%, sys=6.23%, ctx=301, majf=0, minf=1 00:42:51.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:51.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:51.967 issued rwts: total=4096,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:51.967 job3: (groupid=0, jobs=1): err= 0: pid=2990299: Fri Dec 13 03:53:52 2024 00:42:51.967 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:42:51.967 slat (nsec): min=1775, max=27992k, avg=183889.93, stdev=1337535.27 00:42:51.967 clat (usec): min=10036, max=95063, avg=25322.69, stdev=18984.08 00:42:51.967 lat (usec): min=10289, max=95092, avg=25506.58, stdev=19127.00 00:42:51.967 clat percentiles (usec): 00:42:51.967 | 1.00th=[10421], 5.00th=[11469], 10.00th=[12125], 20.00th=[12911], 00:42:51.967 | 30.00th=[13698], 40.00th=[14091], 50.00th=[15008], 60.00th=[17171], 00:42:51.967 | 70.00th=[25560], 80.00th=[37487], 90.00th=[58983], 95.00th=[68682], 00:42:51.967 | 99.00th=[82314], 99.50th=[82314], 99.90th=[86508], 99.95th=[91751], 00:42:51.967 | 99.99th=[94897] 00:42:51.967 write: IOPS=2750, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1004msec); 0 zone resets 00:42:51.967 slat (usec): min=2, 
max=23405, avg=184.85, stdev=1216.06 00:42:51.967 clat (usec): min=1570, max=104944, avg=21584.42, stdev=18348.66 00:42:51.967 lat (msec): min=4, max=104, avg=21.77, stdev=18.48 00:42:51.967 clat percentiles (msec): 00:42:51.967 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 12], 00:42:51.967 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 17], 60.00th=[ 18], 00:42:51.967 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 42], 95.00th=[ 61], 00:42:51.967 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 106], 99.95th=[ 106], 00:42:51.967 | 99.99th=[ 106] 00:42:51.967 bw ( KiB/s): min= 8776, max=12288, per=17.34%, avg=10532.00, stdev=2483.36, samples=2 00:42:51.967 iops : min= 2194, max= 3072, avg=2633.00, stdev=620.84, samples=2 00:42:51.967 lat (msec) : 2=0.02%, 10=2.61%, 20=65.46%, 50=21.48%, 100=9.64% 00:42:51.967 lat (msec) : 250=0.79% 00:42:51.967 cpu : usr=2.49%, sys=4.39%, ctx=214, majf=0, minf=1 00:42:51.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:42:51.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:51.967 issued rwts: total=2560,2761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:51.967 00:42:51.967 Run status group 0 (all jobs): 00:42:51.967 READ: bw=55.5MiB/s (58.2MB/s), 9.96MiB/s-16.0MiB/s (10.4MB/s-16.8MB/s), io=56.2MiB (58.9MB), run=1003-1012msec 00:42:51.968 WRITE: bw=59.3MiB/s (62.2MB/s), 10.7MiB/s-17.8MiB/s (11.3MB/s-18.7MB/s), io=60.0MiB (63.0MB), run=1003-1012msec 00:42:51.968 00:42:51.968 Disk stats (read/write): 00:42:51.968 nvme0n1: ios=3124/3199, merge=0/0, ticks=41196/37410, in_queue=78606, util=98.40% 00:42:51.968 nvme0n2: ios=3634/3814, merge=0/0, ticks=37705/67549, in_queue=105254, util=98.48% 00:42:51.968 nvme0n3: ios=3613/3647, merge=0/0, ticks=42660/47336, in_queue=89996, util=96.36% 00:42:51.968 nvme0n4: ios=2067/2295, merge=0/0, ticks=23342/23682, in_queue=47024, util=98.32% 00:42:51.968 03:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:51.968 [global] 00:42:51.968 thread=1 00:42:51.968 invalidate=1 00:42:51.968 rw=randwrite 00:42:51.968 time_based=1 00:42:51.968 runtime=1 00:42:51.968 ioengine=libaio 00:42:51.968 direct=1 00:42:51.968 bs=4096 00:42:51.968 iodepth=128 00:42:51.968 norandommap=0 00:42:51.968 numjobs=1 00:42:51.968 00:42:51.968 verify_dump=1 00:42:51.968 verify_backlog=512 00:42:51.968 verify_state_save=0 00:42:51.968 do_verify=1 00:42:51.968 verify=crc32c-intel 00:42:51.968 [job0] 00:42:51.968 filename=/dev/nvme0n1 00:42:51.968 [job1] 00:42:51.968 filename=/dev/nvme0n2 00:42:51.968 [job2] 00:42:51.968 filename=/dev/nvme0n3 00:42:51.968 [job3] 00:42:51.968 filename=/dev/nvme0n4 00:42:51.968 Could not set queue depth (nvme0n1) 00:42:51.968 Could not set queue depth (nvme0n2) 00:42:51.968 Could not set queue depth (nvme0n3) 00:42:51.968 Could not set queue depth (nvme0n4) 00:42:51.968 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:51.968 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:51.968 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:51.968 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:51.968 fio-3.35 00:42:51.968 Starting 4 threads 00:42:53.344 00:42:53.344 job0: (groupid=0, jobs=1): err= 0: pid=2990661: Fri Dec 13 03:53:54 2024 00:42:53.344 read: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec) 00:42:53.344 slat (nsec): min=1362, max=14949k, avg=111078.41, stdev=809898.57 00:42:53.344 clat (usec): min=4830, max=54677, avg=13122.57, stdev=6233.02 00:42:53.344 lat (usec): min=4840, max=54680, avg=13233.65, stdev=6305.56 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8291], 20.00th=[ 8979], 00:42:53.344 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[12387], 00:42:53.344 | 70.00th=[14353], 80.00th=[17171], 90.00th=[21890], 95.00th=[23987], 00:42:53.344 | 99.00th=[35390], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:42:53.344 | 99.99th=[54789] 00:42:53.344 write: IOPS=4473, BW=17.5MiB/s (18.3MB/s)(17.8MiB/1017msec); 0 zone resets 00:42:53.344 slat (usec): min=2, max=14452, avg=114.54, stdev=664.00 00:42:53.344 clat (usec): min=3467, max=54679, avg=16383.31, stdev=9642.45 00:42:53.344 lat (usec): min=3493, max=54683, avg=16497.84, stdev=9689.85 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 7963], 20.00th=[ 8455], 00:42:53.344 | 30.00th=[10159], 40.00th=[12387], 50.00th=[16057], 60.00th=[17171], 00:42:53.344 | 70.00th=[17433], 80.00th=[17957], 90.00th=[30016], 95.00th=[40633], 00:42:53.344 | 99.00th=[46924], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:42:53.344 | 99.99th=[54789] 00:42:53.344 bw ( KiB/s): min=16768, max=18608, per=31.01%, avg=17688.00, stdev=1301.08, samples=2 00:42:53.344 iops : min= 4192, max= 4652, avg=4422.00, stdev=325.27, samples=2 00:42:53.344 lat (msec) : 4=0.23%, 10=39.14%, 20=45.92%, 50=14.45%, 100=0.27% 00:42:53.344 cpu : usr=3.74%, sys=5.02%, ctx=410, majf=0, minf=1 00:42:53.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:42:53.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:53.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:53.344 issued rwts: total=4096,4550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:53.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:53.344 job1: (groupid=0, jobs=1): err= 0: pid=2990662: Fri Dec 13 03:53:54 2024 00:42:53.344 read: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(13.0MiB/1051msec) 00:42:53.344 slat (nsec): min=1377, max=13767k, avg=121758.45, stdev=859327.34 00:42:53.344 clat (usec): min=3484, max=52875, avg=16626.86, stdev=9135.08 00:42:53.344 lat (usec): min=3495, max=62137, avg=16748.61, stdev=9169.01 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 5407], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[ 9372], 00:42:53.344 | 30.00th=[ 9765], 40.00th=[13042], 50.00th=[14877], 60.00th=[17171], 00:42:53.344 | 70.00th=[20055], 80.00th=[21365], 90.00th=[23725], 95.00th=[28967], 00:42:53.344 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:42:53.344 | 99.99th=[52691] 00:42:53.344 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec); 0 zone resets 00:42:53.344 slat (usec): min=2, max=14027, avg=162.78, stdev=821.52 00:42:53.344 clat (usec): min=1492, max=86847, avg=21760.97, stdev=16951.36 00:42:53.344 lat (usec): min=1507, max=86860, avg=21923.75, stdev=17047.21 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 3785], 5.00th=[ 6849], 10.00th=[ 
8291], 20.00th=[10683], 00:42:53.344 | 30.00th=[13829], 40.00th=[16057], 50.00th=[17171], 60.00th=[17433], 00:42:53.344 | 70.00th=[17695], 80.00th=[27132], 90.00th=[46924], 95.00th=[63701], 00:42:53.344 | 99.00th=[84411], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:42:53.344 | 99.99th=[86508] 00:42:53.344 bw ( KiB/s): min=13336, max=15336, per=25.13%, avg=14336.00, stdev=1414.21, samples=2 00:42:53.344 iops : min= 3334, max= 3834, avg=3584.00, stdev=353.55, samples=2 00:42:53.344 lat (msec) : 2=0.10%, 4=0.64%, 10=23.35%, 20=48.87%, 50=21.37% 00:42:53.344 lat (msec) : 100=5.67% 00:42:53.344 cpu : usr=2.95%, sys=3.24%, ctx=405, majf=0, minf=1 00:42:53.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:53.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:53.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:53.344 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:53.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:53.344 job2: (groupid=0, jobs=1): err= 0: pid=2990663: Fri Dec 13 03:53:54 2024 00:42:53.344 read: IOPS=3392, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1008msec) 00:42:53.344 slat (nsec): min=1521, max=17961k, avg=152499.14, stdev=1142938.39 00:42:53.344 clat (usec): min=4707, max=62527, avg=17736.73, stdev=8232.65 00:42:53.344 lat (usec): min=4903, max=62531, avg=17889.23, stdev=8346.91 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 8291], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:42:53.344 | 30.00th=[11731], 40.00th=[15533], 50.00th=[17433], 60.00th=[18482], 00:42:53.344 | 70.00th=[19268], 80.00th=[21365], 90.00th=[26346], 95.00th=[28967], 00:42:53.344 | 99.00th=[59507], 99.50th=[61604], 99.90th=[62653], 99.95th=[62653], 00:42:53.344 | 99.99th=[62653] 00:42:53.344 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:42:53.344 slat (usec): min=2, max=18469, avg=127.09, stdev=1019.83 00:42:53.344 clat (usec): min=3666, max=62512, avg=18645.89, stdev=8627.11 00:42:53.344 lat (usec): min=3676, max=62515, avg=18772.98, stdev=8678.91 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 4883], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[13698], 00:42:53.344 | 30.00th=[15664], 40.00th=[16188], 50.00th=[18220], 60.00th=[19006], 00:42:53.344 | 70.00th=[19792], 80.00th=[20579], 90.00th=[26346], 95.00th=[33817], 00:42:53.344 | 99.00th=[54264], 99.50th=[54264], 99.90th=[61604], 99.95th=[62653], 00:42:53.344 | 99.99th=[62653] 00:42:53.344 bw ( KiB/s): min=13136, max=15536, per=25.13%, avg=14336.00, stdev=1697.06, samples=2 00:42:53.344 iops : min= 3284, max= 3884, avg=3584.00, stdev=424.26, samples=2 00:42:53.344 lat (msec) : 4=0.29%, 10=7.67%, 20=66.66%, 50=22.90%, 100=2.48% 00:42:53.344 cpu : usr=3.38%, sys=4.27%, ctx=240, majf=0, minf=1 00:42:53.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:42:53.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:53.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:53.344 issued rwts: total=3420,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:53.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:53.344 job3: (groupid=0, jobs=1): err= 0: pid=2990664: Fri Dec 13 03:53:54 2024 00:42:53.344 read: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1016msec) 00:42:53.344 slat (nsec): min=1447, max=17954k, avg=142685.16, stdev=1148319.07 00:42:53.344 clat (usec): 
min=3786, max=36682, avg=17695.03, stdev=5596.30 00:42:53.344 lat (usec): min=3798, max=42499, avg=17837.71, stdev=5699.51 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 6325], 5.00th=[10814], 10.00th=[10945], 20.00th=[11207], 00:42:53.344 | 30.00th=[14746], 40.00th=[16450], 50.00th=[17695], 60.00th=[18744], 00:42:53.344 | 70.00th=[19792], 80.00th=[21890], 90.00th=[26084], 95.00th=[28181], 00:42:53.344 | 99.00th=[31589], 99.50th=[32900], 99.90th=[36439], 99.95th=[36439], 00:42:53.344 | 99.99th=[36439] 00:42:53.344 write: IOPS=3218, BW=12.6MiB/s (13.2MB/s)(12.8MiB/1016msec); 0 zone resets 00:42:53.344 slat (usec): min=2, max=17249, avg=165.70, stdev=1137.72 00:42:53.344 clat (usec): min=1521, max=82641, avg=22693.35, stdev=12780.62 00:42:53.344 lat (usec): min=1534, max=82653, avg=22859.05, stdev=12854.04 00:42:53.344 clat percentiles (usec): 00:42:53.344 | 1.00th=[ 4817], 5.00th=[12256], 10.00th=[14353], 20.00th=[16319], 00:42:53.344 | 30.00th=[17433], 40.00th=[18482], 50.00th=[19268], 60.00th=[19792], 00:42:53.344 | 70.00th=[20055], 80.00th=[25822], 90.00th=[41157], 95.00th=[52691], 00:42:53.344 | 99.00th=[79168], 99.50th=[81265], 99.90th=[82314], 99.95th=[82314], 00:42:53.344 | 99.99th=[82314] 00:42:53.344 bw ( KiB/s): min=11504, max=13632, per=22.03%, avg=12568.00, stdev=1504.72, samples=2 00:42:53.344 iops : min= 2876, max= 3408, avg=3142.00, stdev=376.18, samples=2 00:42:53.344 lat (msec) : 2=0.03%, 4=0.73%, 10=1.62%, 20=67.41%, 50=27.25% 00:42:53.344 lat (msec) : 100=2.96% 00:42:53.344 cpu : usr=2.96%, sys=4.43%, ctx=235, majf=0, minf=1 00:42:53.344 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:42:53.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:53.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:53.344 issued rwts: total=3072,3270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:53.344 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:53.344 00:42:53.344 Run status group 0 (all jobs): 00:42:53.344 READ: bw=51.7MiB/s (54.2MB/s), 11.8MiB/s-15.7MiB/s (12.4MB/s-16.5MB/s), io=54.3MiB (57.0MB), run=1008-1051msec 00:42:53.344 WRITE: bw=55.7MiB/s (58.4MB/s), 12.6MiB/s-17.5MiB/s (13.2MB/s-18.3MB/s), io=58.5MiB (61.4MB), run=1008-1051msec 00:42:53.344 00:42:53.344 Disk stats (read/write): 00:42:53.344 nvme0n1: ios=3617/3887, merge=0/0, ticks=46672/55561, in_queue=102233, util=99.40% 00:42:53.344 nvme0n2: ios=2575/3071, merge=0/0, ticks=40538/65301, in_queue=105839, util=98.48% 00:42:53.344 nvme0n3: ios=2970/3072, merge=0/0, ticks=51757/53923, in_queue=105680, util=96.16% 00:42:53.345 nvme0n4: ios=2575/2647, merge=0/0, ticks=47388/55217, in_queue=102605, util=98.12% 00:42:53.345 03:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:53.345 03:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2990892 00:42:53.345 03:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:53.345 03:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:53.345 [global] 00:42:53.345 thread=1 00:42:53.345 invalidate=1 00:42:53.345 rw=read 00:42:53.345 time_based=1 00:42:53.345 runtime=10 00:42:53.345 ioengine=libaio 00:42:53.345 direct=1 00:42:53.345 bs=4096 00:42:53.345 iodepth=1 00:42:53.345 norandommap=1 
00:42:53.345 numjobs=1 00:42:53.345 00:42:53.345 [job0] 00:42:53.345 filename=/dev/nvme0n1 00:42:53.345 [job1] 00:42:53.345 filename=/dev/nvme0n2 00:42:53.345 [job2] 00:42:53.345 filename=/dev/nvme0n3 00:42:53.345 [job3] 00:42:53.345 filename=/dev/nvme0n4 00:42:53.345 Could not set queue depth (nvme0n1) 00:42:53.345 Could not set queue depth (nvme0n2) 00:42:53.345 Could not set queue depth (nvme0n3) 00:42:53.345 Could not set queue depth (nvme0n4) 00:42:53.603 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:53.603 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:53.603 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:53.603 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:53.603 fio-3.35 00:42:53.603 Starting 4 threads 00:42:56.890 03:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:56.890 03:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:56.890 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:42:56.890 fio: pid=2991032, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:56.890 03:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:56.890 03:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:56.890 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=331776, buflen=4096 00:42:56.890 fio: pid=2991031, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:56.890 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:42:56.890 fio: pid=2991029, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:56.890 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:56.890 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:57.148 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=339968, buflen=4096 00:42:57.148 fio: pid=2991030, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:57.148 00:42:57.148 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2991029: Fri Dec 13 03:53:58 2024 00:42:57.148 read: IOPS=24, BW=95.4KiB/s (97.7kB/s)(304KiB/3186msec) 00:42:57.148 slat (usec): min=11, max=28835, avg=397.46, stdev=3283.43 00:42:57.148 clat (usec): min=407, max=98412, avg=41227.57, stdev=8119.69 00:42:57.148 lat (usec): min=437, max=98434, avg=41629.91, stdev=8769.23 00:42:57.148 clat percentiles (usec): 00:42:57.148 | 1.00th=[ 408], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:57.148 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:57.148 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:57.148 | 99.00th=[98042], 99.50th=[98042], 99.90th=[98042], 99.95th=[98042], 00:42:57.148 | 99.99th=[98042] 00:42:57.148 bw ( KiB/s): min= 93, max= 104, per=26.63%, avg=96.83, stdev= 3.71, samples=6 00:42:57.148 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:42:57.148 lat (usec) : 500=1.30% 00:42:57.148 lat (msec) : 50=96.10%, 100=1.30% 00:42:57.148 cpu : usr=0.13%, sys=0.00%, ctx=78, majf=0, minf=2 00:42:57.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.148 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.148 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:57.148 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2991030: Fri Dec 13 03:53:58 2024 00:42:57.148 read: IOPS=24, BW=97.4KiB/s (99.8kB/s)(332KiB/3407msec) 00:42:57.148 slat (usec): min=10, max=20789, avg=271.20, stdev=2265.69 00:42:57.148 clat (usec): min=540, max=41980, avg=40512.95, stdev=4444.16 00:42:57.148 lat (usec): min=568, max=62045, avg=40786.56, stdev=5031.74 00:42:57.148 clat percentiles (usec): 00:42:57.148 | 1.00th=[ 545], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:57.148 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:57.148 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:57.148 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:57.148 | 99.99th=[42206] 00:42:57.148 bw ( KiB/s): min= 93, max= 104, per=27.19%, avg=98.17, stdev= 4.67, samples=6 00:42:57.148 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:42:57.148 lat (usec) : 750=1.19% 00:42:57.148 lat (msec) : 50=97.62% 00:42:57.148 cpu : usr=0.00%, sys=0.12%, ctx=86, majf=0, minf=2 00:42:57.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.148 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.148 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:57.148 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2991031: Fri Dec 13 03:53:58 2024 00:42:57.148 read: IOPS=27, BW=110KiB/s (113kB/s)(324KiB/2940msec) 00:42:57.148 slat (nsec): min=4000, max=30883, avg=20950.84, stdev=5803.50 00:42:57.148 clat (usec): min=220, max=42013, avg=36002.38, stdev=13464.82 00:42:57.148 lat (usec): min=227, max=42044, avg=36023.29, stdev=13468.93 00:42:57.148 clat percentiles (usec): 00:42:57.148 | 1.00th=[ 221], 5.00th=[ 351], 10.00th=[ 412], 20.00th=[40633], 00:42:57.149 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:57.149 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:57.149 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:57.149 | 99.99th=[42206] 00:42:57.149 bw ( KiB/s): min= 96, max= 167, per=30.80%, avg=111.80, stdev=31.05, samples=5 00:42:57.149 iops : min= 24, max= 41, avg=27.80, stdev= 7.43, samples=5 00:42:57.149 lat (usec) : 250=1.22%, 500=10.98% 
00:42:57.149 lat (msec) : 50=86.59% 00:42:57.149 cpu : usr=0.10%, sys=0.00%, ctx=83, majf=0, minf=1 00:42:57.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.149 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.149 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:57.149 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2991032: Fri Dec 13 03:53:58 2024 00:42:57.149 read: IOPS=24, BW=98.1KiB/s (100kB/s)(268KiB/2731msec) 00:42:57.149 slat (nsec): min=14652, max=32758, avg=23382.87, stdev=2174.66 00:42:57.149 clat (usec): min=508, max=42989, avg=40417.41, stdev=4957.79 00:42:57.149 lat (usec): min=541, max=43018, avg=40440.82, stdev=4956.65 00:42:57.149 clat percentiles (usec): 00:42:57.149 | 1.00th=[ 510], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:57.149 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:57.149 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:57.149 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:57.149 | 99.99th=[42730] 00:42:57.149 bw ( KiB/s): min= 96, max= 104, per=26.91%, avg=97.60, stdev= 3.58, samples=5 00:42:57.149 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:42:57.149 lat (usec) : 750=1.47% 00:42:57.149 lat (msec) : 50=97.06% 00:42:57.149 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=2 00:42:57.149 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:57.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.149 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:57.149 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:57.149 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:57.149 00:42:57.149 Run status group 0 (all jobs): 00:42:57.149 READ: bw=360KiB/s (369kB/s), 95.4KiB/s-110KiB/s (97.7kB/s-113kB/s), io=1228KiB (1257kB), run=2731-3407msec 00:42:57.149 00:42:57.149 Disk stats (read/write): 00:42:57.149 nvme0n1: ios=75/0, merge=0/0, ticks=3037/0, in_queue=3037, util=94.88% 00:42:57.149 nvme0n2: ios=102/0, merge=0/0, ticks=3626/0, in_queue=3626, util=99.40% 00:42:57.149 nvme0n3: ios=115/0, merge=0/0, ticks=2990/0, in_queue=2990, util=99.83% 00:42:57.149 nvme0n4: ios=92/0, merge=0/0, ticks=2665/0, in_queue=2665, util=97.97% 00:42:57.407 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:57.407 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:57.666 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:57.666 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:57.666 03:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:57.666 03:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:57.924 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:57.924 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:58.183 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:58.183 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:58.442 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:42:58.442 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2990892 00:42:58.442 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:58.442 03:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:59.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:59.819 nvmf hotplug test: fio failed as expected 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:59.819 rmmod nvme_tcp 00:42:59.819 rmmod nvme_fabrics 00:42:59.819 rmmod nvme_keyring 00:42:59.819 03:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2988254 ']' 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2988254 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2988254 ']' 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2988254 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:59.819 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2988254 00:43:00.076 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:00.076 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:00.076 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2988254' 00:43:00.076 killing process with pid 2988254 00:43:00.076 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2988254 00:43:00.076 03:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2988254 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:01.010 03:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:03.547 00:43:03.547 real 0m28.629s 00:43:03.547 user 1m38.856s 00:43:03.547 sys 0m10.366s 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:03.547 ************************************ 00:43:03.547 END TEST nvmf_fio_target 00:43:03.547 ************************************ 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:03.547 ************************************ 00:43:03.547 START TEST nvmf_bdevio 00:43:03.547 ************************************ 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:03.547 * Looking for test storage... 
00:43:03.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:03.547 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:03.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.548 --rc genhtml_branch_coverage=1 00:43:03.548 --rc genhtml_function_coverage=1 00:43:03.548 --rc genhtml_legend=1 00:43:03.548 --rc geninfo_all_blocks=1 00:43:03.548 --rc geninfo_unexecuted_blocks=1 00:43:03.548 00:43:03.548 ' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:03.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.548 --rc genhtml_branch_coverage=1 00:43:03.548 --rc genhtml_function_coverage=1 00:43:03.548 --rc genhtml_legend=1 00:43:03.548 --rc geninfo_all_blocks=1 00:43:03.548 --rc geninfo_unexecuted_blocks=1 00:43:03.548 00:43:03.548 ' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:03.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.548 --rc genhtml_branch_coverage=1 00:43:03.548 --rc genhtml_function_coverage=1 00:43:03.548 --rc genhtml_legend=1 00:43:03.548 --rc geninfo_all_blocks=1 00:43:03.548 --rc geninfo_unexecuted_blocks=1 00:43:03.548 00:43:03.548 ' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:03.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.548 --rc genhtml_branch_coverage=1 00:43:03.548 --rc genhtml_function_coverage=1 00:43:03.548 --rc genhtml_legend=1 00:43:03.548 --rc geninfo_all_blocks=1 00:43:03.548 --rc geninfo_unexecuted_blocks=1 00:43:03.548 00:43:03.548 ' 00:43:03.548 03:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:03.548 03:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:03.548 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:03.549 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:03.549 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:03.549 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:03.549 03:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:08.822 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:08.822 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:08.823 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:08.823 03:54:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:08.823 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:08.823 Found net devices under 0000:af:00.0: cvl_0_0 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:08.823 Found net devices under 0000:af:00.1: cvl_0_1 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:08.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:08.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:43:08.823 00:43:08.823 --- 10.0.0.2 ping statistics --- 00:43:08.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.823 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:43:08.823 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:08.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:08.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:43:08.823 00:43:08.824 --- 10.0.0.1 ping statistics --- 00:43:08.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.824 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:08.824 03:54:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2996040 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2996040 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2996040 ']' 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:08.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:08.824 03:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.083 [2024-12-13 03:54:10.052353] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:09.083 [2024-12-13 03:54:10.054489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:43:09.083 [2024-12-13 03:54:10.054561] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:09.083 [2024-12-13 03:54:10.176509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:09.342 [2024-12-13 03:54:10.292539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:09.342 [2024-12-13 03:54:10.292583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:09.342 [2024-12-13 03:54:10.292596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:09.342 [2024-12-13 03:54:10.292606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:09.342 [2024-12-13 03:54:10.292616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:09.342 [2024-12-13 03:54:10.295044] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:43:09.342 [2024-12-13 03:54:10.295082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:43:09.342 [2024-12-13 03:54:10.295153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:09.342 [2024-12-13 03:54:10.295177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:43:09.601 [2024-12-13 03:54:10.633754] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
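For reference, the interrupt-mode target start traced above reduces to roughly the following shell sequence. This is a minimal sketch assembled from the traced nvmf/common.sh commands, not the verbatim script; the waitforlisten helper and the /var/tmp/spdk.sock RPC socket belong to the surrounding test framework and are only referenced here.

    # Launch nvmf_tgt inside the target namespace with interrupt mode enabled,
    # reusing the core mask (-m 0x78) and trace mask (-e 0xFFFF) shown in the trace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # Block until the app is up and listening on its RPC socket (/var/tmp/spdk.sock).
    waitforlisten "$nvmfpid"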
00:43:09.601 [2024-12-13 03:54:10.635216] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:09.601 [2024-12-13 03:54:10.636645] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:09.601 [2024-12-13 03:54:10.637351] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:09.601 [2024-12-13 03:54:10.637631] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.860 [2024-12-13 03:54:10.912031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.860 03:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.860 Malloc0 00:43:09.860 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.860 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:09.860 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.860 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.860 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.860 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.861 03:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:09.861 [2024-12-13 03:54:11.040347] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:09.861 { 00:43:09.861 "params": { 00:43:09.861 "name": "Nvme$subsystem", 00:43:09.861 "trtype": "$TEST_TRANSPORT", 00:43:09.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:09.861 "adrfam": "ipv4", 00:43:09.861 "trsvcid": "$NVMF_PORT", 00:43:09.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:09.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:09.861 "hdgst": ${hdgst:-false}, 00:43:09.861 "ddgst": ${ddgst:-false} 00:43:09.861 }, 00:43:09.861 "method": "bdev_nvme_attach_controller" 00:43:09.861 } 00:43:09.861 EOF 00:43:09.861 )") 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:09.861 03:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:09.861 "params": { 00:43:09.861 "name": "Nvme1", 00:43:09.861 "trtype": "tcp", 00:43:09.861 "traddr": "10.0.0.2", 00:43:09.861 "adrfam": "ipv4", 00:43:09.861 "trsvcid": "4420", 00:43:09.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:09.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:09.861 "hdgst": false, 00:43:09.861 "ddgst": false 00:43:09.861 }, 00:43:09.861 "method": "bdev_nvme_attach_controller" 00:43:09.861 }' 00:43:10.120 [2024-12-13 03:54:11.113845] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:43:10.120 [2024-12-13 03:54:11.113932] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996389 ] 00:43:10.120 [2024-12-13 03:54:11.227432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:10.379 [2024-12-13 03:54:11.336639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:10.379 [2024-12-13 03:54:11.336707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.379 [2024-12-13 03:54:11.336712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:10.947 I/O targets: 00:43:10.947 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:10.947 00:43:10.947 00:43:10.947 CUnit - A unit testing framework for C - Version 2.1-3 00:43:10.947 http://cunit.sourceforge.net/ 00:43:10.947 00:43:10.947 00:43:10.947 Suite: bdevio tests on: Nvme1n1 00:43:10.947 Test: blockdev write read block ...passed 00:43:10.947 Test: blockdev write zeroes read block ...passed 00:43:10.947 Test: blockdev write zeroes read no split ...passed 00:43:10.947 Test: blockdev write zeroes read split ...passed 00:43:11.207 Test: blockdev write zeroes read split partial ...passed 00:43:11.207 Test: blockdev reset ...[2024-12-13 03:54:12.157732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:11.207 [2024-12-13 03:54:12.157844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:43:11.207 [2024-12-13 03:54:12.205342] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:11.207 passed 00:43:11.207 Test: blockdev write read 8 blocks ...passed 00:43:11.207 Test: blockdev write read size > 128k ...passed 00:43:11.207 Test: blockdev write read invalid size ...passed 00:43:11.207 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:11.207 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:11.207 Test: blockdev write read max offset ...passed 00:43:11.207 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:11.207 Test: blockdev writev readv 8 blocks ...passed 00:43:11.207 Test: blockdev writev readv 30 x 1block ...passed 00:43:11.207 Test: blockdev writev readv block ...passed 00:43:11.207 Test: blockdev writev readv size > 128k ...passed 00:43:11.207 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:11.207 Test: blockdev comparev and writev ...[2024-12-13 03:54:12.378750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.378786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.378806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.378818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.379191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.379210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.379230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.379241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.379604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.379619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.379634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.379644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.380010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.380026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:11.207 [2024-12-13 03:54:12.380043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:11.207 [2024-12-13 03:54:12.380054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:11.466 passed 00:43:11.466 Test: blockdev nvme passthru rw ...passed 00:43:11.466 Test: blockdev nvme passthru vendor specific ...[2024-12-13 03:54:12.462358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:11.466 [2024-12-13 03:54:12.462385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:11.466 [2024-12-13 03:54:12.462524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:11.466 [2024-12-13 03:54:12.462537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:11.466 [2024-12-13 03:54:12.462673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:11.466 [2024-12-13 03:54:12.462686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:11.466 [2024-12-13 03:54:12.462817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:11.466 [2024-12-13 03:54:12.462831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:11.466 passed 00:43:11.466 Test: blockdev nvme admin passthru ...passed 00:43:11.466 Test: blockdev copy ...passed 00:43:11.466 00:43:11.466 Run Summary: Type Total Ran Passed Failed Inactive 00:43:11.466 suites 1 1 n/a 0 0 00:43:11.466 tests 23 23 23 0 0 00:43:11.466 asserts 152 152 152 0 n/a 00:43:11.466 00:43:11.466 Elapsed time = 1.317 seconds 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:12.404 rmmod nvme_tcp 00:43:12.404 rmmod nvme_fabrics 00:43:12.404 rmmod nvme_keyring 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
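For reference, the nvmftestfini teardown being traced here (and earlier for nvmf_fio_target) boils down to roughly the commands below. This is a simplified outline drawn from the traced lines rather than the nvmf/common.sh code itself, and it assumes nvmfpid still holds the target's PID.

    # Unload the kernel initiator modules pulled in for the TCP test.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt process that served nqn.2016-06.io.spdk:cnode1.
    kill "$nvmfpid"
    wait "$nvmfpid"
    # Drop only the firewall rules the test added (tagged with an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Flush the initiator-side test address before the spdk namespace is removed.
    ip -4 addr flush cvl_0_1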
00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2996040 ']' 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2996040 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2996040 ']' 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2996040 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996040 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996040' 00:43:12.404 killing process with pid 2996040 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2996040 00:43:12.404 03:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2996040 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:13.783 03:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:16.315 03:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:16.315 00:43:16.315 real 0m12.580s 00:43:16.315 user 
0m18.363s 00:43:16.315 sys 0m5.384s 00:43:16.315 03:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:16.315 03:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:16.315 ************************************ 00:43:16.315 END TEST nvmf_bdevio 00:43:16.315 ************************************ 00:43:16.315 03:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:16.315 00:43:16.315 real 4m58.680s 00:43:16.315 user 10m9.411s 00:43:16.315 sys 1m50.818s 00:43:16.315 03:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:16.315 03:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:16.315 ************************************ 00:43:16.315 END TEST nvmf_target_core_interrupt_mode 00:43:16.315 ************************************ 00:43:16.315 03:54:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:16.315 03:54:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:16.315 03:54:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:16.315 03:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:16.315 ************************************ 00:43:16.315 START TEST nvmf_interrupt 00:43:16.315 ************************************ 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:16.315 * Looking for test storage... 
00:43:16.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.315 --rc genhtml_branch_coverage=1 00:43:16.315 --rc genhtml_function_coverage=1 00:43:16.315 --rc genhtml_legend=1 00:43:16.315 --rc geninfo_all_blocks=1 00:43:16.315 --rc geninfo_unexecuted_blocks=1 00:43:16.315 00:43:16.315 ' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.315 --rc genhtml_branch_coverage=1 00:43:16.315 --rc genhtml_function_coverage=1 00:43:16.315 --rc genhtml_legend=1 00:43:16.315 --rc geninfo_all_blocks=1 00:43:16.315 --rc geninfo_unexecuted_blocks=1 00:43:16.315 00:43:16.315 ' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.315 --rc genhtml_branch_coverage=1 00:43:16.315 --rc genhtml_function_coverage=1 00:43:16.315 --rc genhtml_legend=1 00:43:16.315 --rc geninfo_all_blocks=1 00:43:16.315 --rc geninfo_unexecuted_blocks=1 00:43:16.315 00:43:16.315 ' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:16.315 --rc genhtml_branch_coverage=1 00:43:16.315 --rc genhtml_function_coverage=1 00:43:16.315 --rc genhtml_legend=1 00:43:16.315 --rc geninfo_all_blocks=1 00:43:16.315 --rc geninfo_unexecuted_blocks=1 00:43:16.315 00:43:16.315 ' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.315 03:54:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:16.316 03:54:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:21.593 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:21.594 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.594 03:54:22 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:21.594 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:21.594 Found net devices under 0000:af:00.0: cvl_0_0 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:21.594 Found net devices under 0000:af:00.1: cvl_0_1 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:21.594 03:54:22 
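The device scan above matches the two Intel E810 ports (vendor 0x8086, device 0x159b) and resolves each PCI function to its net interface by globbing the device's net/ directory in sysfs. A minimal sketch of the same lookup done by hand, using the PCI addresses reported in this run:

    # Map each E810 port found in the trace to its kernel net interface.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: ${dev##*/}"    # cvl_0_0 and cvl_0_1 on this host
        done
    done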
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:21.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:21.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:43:21.594 00:43:21.594 --- 10.0.0.2 ping statistics --- 00:43:21.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.594 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:21.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:21.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:43:21.594 00:43:21.594 --- 10.0.0.1 ping statistics --- 00:43:21.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.594 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3000312 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3000312 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3000312 ']' 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:21.594 03:54:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:21.594 [2024-12-13 03:54:22.619063] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:21.594 [2024-12-13 03:54:22.621125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:43:21.594 [2024-12-13 03:54:22.621207] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:21.594 [2024-12-13 03:54:22.731621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:21.853 [2024-12-13 03:54:22.838699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
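nvmf_tcp_init above builds the loopback topology used for the TCP test: cvl_0_0 becomes the target port inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator port with 10.0.0.1/24, and an iptables rule admits TCP/4420. Condensed from the commands traced above (interface and namespace names are the ones from this run):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

nvmfappstart then launches the target inside that namespace on two cores with interrupt mode enabled, which is what the DPDK and reactor notices around this point report:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &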
00:43:21.853 [2024-12-13 03:54:22.838741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:21.853 [2024-12-13 03:54:22.838754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:21.853 [2024-12-13 03:54:22.838765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:21.853 [2024-12-13 03:54:22.838778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:21.853 [2024-12-13 03:54:22.840960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.853 [2024-12-13 03:54:22.840961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:22.113 [2024-12-13 03:54:23.145749] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:22.113 [2024-12-13 03:54:23.146364] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:22.113 [2024-12-13 03:54:23.146581] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:22.372 5000+0 records in 00:43:22.372 5000+0 records out 00:43:22.372 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00920547 s, 1.1 GB/s 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:22.372 AIO0 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:22.372 [2024-12-13 03:54:23.515213] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.372 03:54:23 
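setup_bdev_aio and the RPC calls traced here and continued just below provision the target: a roughly 10 MB file-backed AIO bdev, a TCP transport with a 256-deep queue, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace on 10.0.0.2:4420. Assuming rpc_cmd resolves to scripts/rpc.py talking to /var/tmp/spdk.sock, as in the SPDK test harness, the sequence amounts to:

    dd if=/dev/zero of=test/nvmf/target/aiofile bs=2048 count=5000       # ~10 MB backing file
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aiofile AIO0 2048  # 2 KiB block size
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420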
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:22.372 [2024-12-13 03:54:23.545898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3000312 0 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3000312 0 idle 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:22.372 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000312 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.60 reactor_0' 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000312 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.60 reactor_0 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- 
# cpu_rate=0 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3000312 1 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3000312 1 idle 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:22.632 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000318 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.00 reactor_1' 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000318 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.00 reactor_1 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3000571 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3000312 0 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3000312 0 busy 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:22.892 03:54:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:22.892 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000312 root 20 0 20.1t 208128 101376 S 0.0 0.2 0:00.61 reactor_0' 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000312 root 20 0 20.1t 208128 101376 S 0.0 0.2 0:00.61 reactor_0 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:23.151 03:54:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000312 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:02.82 reactor_0' 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000312 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:02.82 reactor_0 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:24.092 03:54:25 
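reactor_is_busy and reactor_is_idle decide a reactor's state by sampling its thread with top and reading the %CPU column; in this run a reactor counts as busy at 30% or more and as idle at 30% or less. While spdk_nvme_perf drives random read/write I/O from cores 2 and 3 (-c 0xC), both reactors are expected to cross the busy threshold. A stripped-down sketch of the load-and-check step the trace keeps repeating, reusing the PID and thresholds from this run:

    # Drive I/O at the target for 10 seconds (command as recorded above).
    ./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

    pid=3000312   # nvmf_tgt PID in this run
    idx=0         # reactor index to inspect
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
    cpu_rate=$(awk '{print $9}' <<< "$line")   # %CPU of the reactor thread
    cpu_rate=${cpu_rate%.*}                    # drop the fractional part
    if (( cpu_rate >= 30 )); then echo "reactor_$idx busy"; else echo "reactor_$idx idle"; fi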
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3000312 1 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3000312 1 busy 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:24.092 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000318 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:01.29 reactor_1' 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000318 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:01.29 reactor_1 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:24.352 03:54:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3000571 00:43:34.429 Initializing NVMe Controllers 00:43:34.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:34.429 Controller IO queue size 256, less than required. 00:43:34.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:34.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:34.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:34.429 Initialization complete. Launching workers. 
00:43:34.429 ======================================================== 00:43:34.429 Latency(us) 00:43:34.429 Device Information : IOPS MiB/s Average min max 00:43:34.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15241.10 59.54 16806.00 5101.90 21836.15 00:43:34.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15166.00 59.24 16889.95 5310.95 58589.14 00:43:34.429 ======================================================== 00:43:34.429 Total : 30407.10 118.78 16847.87 5101.90 58589.14 00:43:34.429 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3000312 0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3000312 0 idle 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000312 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:20.59 reactor_0' 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000312 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:20.59 reactor_0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3000312 1 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3000312 1 idle 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000318 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:10.00 reactor_1' 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000318 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:10.00 reactor_1 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:34.429 03:54:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:34.429 03:54:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:34.429 03:54:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:43:34.429 03:54:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:34.429 03:54:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:34.429 03:54:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt 
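With the reactors back to idle after the perf run, the test attaches the kernel initiator through nvme-cli and waits for a block device whose serial matches the subsystem's. The connect-and-verify steps recorded above, reduced to their essentials (the host NQN and host ID are the values generated for this run):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
        --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
    sleep 2
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1 once the namespace appears
    # the trace further below tears the session down again:
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1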
-- target/interrupt.sh@52 -- # for i in {0..1} 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3000312 0 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3000312 0 idle 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000312 root 20 0 20.1t 274944 120576 S 6.7 0.3 0:21.01 reactor_0' 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000312 root 20 0 20.1t 274944 120576 S 6.7 0.3 0:21.01 reactor_0 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3000312 1 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3000312 1 idle 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3000312 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3000312 -w 256 00:43:36.336 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3000318 root 20 0 20.1t 274944 120576 S 0.0 0.3 0:10.18 reactor_1' 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3000318 root 20 0 20.1t 274944 120576 S 0.0 0.3 0:10.18 reactor_1 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:36.596 03:54:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:37.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:37.165 rmmod nvme_tcp 00:43:37.165 rmmod nvme_fabrics 00:43:37.165 rmmod nvme_keyring 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3000312 ']' 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3000312 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3000312 ']' 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3000312 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000312 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000312' 00:43:37.165 killing process with pid 3000312 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3000312 00:43:37.165 03:54:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3000312 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:38.544 03:54:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:40.451 03:54:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:40.451 00:43:40.451 real 0m24.511s 00:43:40.451 user 0m41.955s 00:43:40.451 sys 0m8.201s 00:43:40.451 03:54:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:40.451 03:54:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.451 ************************************ 00:43:40.451 END TEST nvmf_interrupt 00:43:40.451 ************************************ 00:43:40.451 00:43:40.451 real 37m15.884s 00:43:40.451 user 92m3.774s 00:43:40.451 sys 9m43.401s 00:43:40.451 03:54:41 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:40.451 03:54:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:40.451 ************************************ 00:43:40.451 END TEST nvmf_tcp 00:43:40.451 ************************************ 00:43:40.451 03:54:41 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:43:40.451 03:54:41 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:40.451 03:54:41 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:40.451 03:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:40.451 03:54:41 -- common/autotest_common.sh@10 -- # set +x 00:43:40.451 ************************************ 00:43:40.451 START TEST spdkcli_nvmf_tcp 00:43:40.451 ************************************ 00:43:40.451 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:40.711 * Looking for test storage... 00:43:40.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:40.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.711 --rc genhtml_branch_coverage=1 00:43:40.711 --rc genhtml_function_coverage=1 00:43:40.711 --rc genhtml_legend=1 00:43:40.711 --rc geninfo_all_blocks=1 00:43:40.711 --rc geninfo_unexecuted_blocks=1 00:43:40.711 00:43:40.711 ' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:40.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.711 --rc genhtml_branch_coverage=1 00:43:40.711 --rc genhtml_function_coverage=1 00:43:40.711 --rc genhtml_legend=1 00:43:40.711 --rc geninfo_all_blocks=1 00:43:40.711 --rc geninfo_unexecuted_blocks=1 00:43:40.711 00:43:40.711 ' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:40.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.711 --rc genhtml_branch_coverage=1 00:43:40.711 --rc genhtml_function_coverage=1 00:43:40.711 --rc genhtml_legend=1 00:43:40.711 --rc geninfo_all_blocks=1 00:43:40.711 --rc geninfo_unexecuted_blocks=1 00:43:40.711 00:43:40.711 ' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:40.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:40.711 --rc genhtml_branch_coverage=1 00:43:40.711 --rc genhtml_function_coverage=1 00:43:40.711 --rc genhtml_legend=1 00:43:40.711 --rc geninfo_all_blocks=1 00:43:40.711 --rc geninfo_unexecuted_blocks=1 00:43:40.711 00:43:40.711 ' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:40.711 
03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:40.711 03:54:41 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:40.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:40.711 03:54:41 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3003436 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3003436 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3003436 ']' 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:40.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:40.712 03:54:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:40.712 [2024-12-13 03:54:41.890880] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:43:40.712 [2024-12-13 03:54:41.890974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3003436 ] 00:43:40.971 [2024-12-13 03:54:41.994369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:40.971 [2024-12-13 03:54:42.102172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:40.971 [2024-12-13 03:54:42.102182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:41.540 03:54:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:41.540 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:41.540 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:43:41.540 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:43:41.540 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:43:41.540 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:43:41.540 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:43:41.540 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:41.540 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:41.540 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:43:41.540 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:43:41.541 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:41.541 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:43:41.541 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:41.541 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:43:41.541 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:43:41.541 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:43:41.541 ' 00:43:44.829 [2024-12-13 03:54:45.371466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:45.396 [2024-12-13 03:54:46.591783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:43:47.931 [2024-12-13 03:54:48.838895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:43:49.836 [2024-12-13 03:54:50.769231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:43:51.214 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:43:51.214 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:43:51.214 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:43:51.214 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:43:51.214 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:43:51.214 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:43:51.214 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:43:51.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:51.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:43:51.214 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:51.214 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:43:51.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:43:51.214 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:43:51.214 03:54:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:51.783 
03:54:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:51.783 03:54:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:43:51.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:43:51.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:51.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:43:51.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:43:51.783 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:43:51.783 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:43:51.783 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:43:51.783 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:43:51.783 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:43:51.783 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:43:51.783 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:43:51.783 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:43:51.783 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:43:51.783 ' 00:43:58.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:58.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:58.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:58.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:43:58.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:58.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:58.351 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:58.351 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:58.351 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:58.351 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:58.351 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:58.351 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:58.351 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:58.351 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.351 
03:54:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3003436 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3003436 ']' 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3003436 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3003436 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3003436' 00:43:58.351 killing process with pid 3003436 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3003436 00:43:58.351 03:54:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3003436 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3003436 ']' 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3003436 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3003436 ']' 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3003436 00:43:58.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3003436) - No such process 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3003436 is not found' 00:43:58.611 Process with pid 3003436 is not found 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:58.611 00:43:58.611 real 0m18.082s 00:43:58.611 user 0m37.096s 00:43:58.611 sys 0m0.851s 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:58.611 03:54:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:58.611 ************************************ 00:43:58.611 END TEST spdkcli_nvmf_tcp 00:43:58.611 ************************************ 00:43:58.611 03:54:59 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:58.611 03:54:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:58.611 03:54:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:58.611 03:54:59 -- common/autotest_common.sh@10 -- # set +x 00:43:58.611 ************************************ 00:43:58.611 START TEST nvmf_identify_passthru 00:43:58.611 ************************************ 00:43:58.611 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:58.871 * Looking for test 
storage... 00:43:58.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:58.871 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:58.871 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:43:58.871 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:58.871 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:58.871 03:54:59 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:58.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.872 --rc genhtml_branch_coverage=1 00:43:58.872 --rc genhtml_function_coverage=1 00:43:58.872 --rc genhtml_legend=1 00:43:58.872 --rc geninfo_all_blocks=1 00:43:58.872 --rc geninfo_unexecuted_blocks=1 00:43:58.872 00:43:58.872 ' 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:58.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.872 --rc genhtml_branch_coverage=1 00:43:58.872 --rc genhtml_function_coverage=1 00:43:58.872 --rc genhtml_legend=1 00:43:58.872 --rc geninfo_all_blocks=1 00:43:58.872 --rc geninfo_unexecuted_blocks=1 00:43:58.872 00:43:58.872 ' 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:58.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.872 --rc genhtml_branch_coverage=1 00:43:58.872 --rc genhtml_function_coverage=1 00:43:58.872 --rc genhtml_legend=1 00:43:58.872 --rc geninfo_all_blocks=1 00:43:58.872 --rc geninfo_unexecuted_blocks=1 00:43:58.872 00:43:58.872 ' 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:58.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:58.872 --rc genhtml_branch_coverage=1 00:43:58.872 --rc genhtml_function_coverage=1 00:43:58.872 --rc genhtml_legend=1 00:43:58.872 --rc geninfo_all_blocks=1 00:43:58.872 --rc geninfo_unexecuted_blocks=1 00:43:58.872 00:43:58.872 ' 00:43:58.872 03:54:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:58.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:58.872 03:54:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:58.872 03:54:59 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:58.872 03:54:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:58.872 03:54:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:58.872 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:58.872 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:58.873 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:58.873 03:54:59 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:43:58.873 03:54:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:04.148 03:55:05 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:04.148 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:04.148 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:04.148 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:04.149 Found net devices under 0000:af:00.0: cvl_0_0 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:04.149 Found net devices under 0000:af:00.1: cvl_0_1 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:04.149 03:55:05 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:04.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:04.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:44:04.149 00:44:04.149 --- 10.0.0.2 ping statistics --- 00:44:04.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:04.149 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:04.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:04.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:44:04.149 00:44:04.149 --- 10.0.0.1 ping statistics --- 00:44:04.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:04.149 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:04.149 03:55:05 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:44:04.408 03:55:05 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:04.408 03:55:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:08.600 03:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:44:08.600 03:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:08.600 03:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:08.600 03:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3010543 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3010543 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3010543 ']' 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:13.874 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:13.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:13.874 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:13.874 [2024-12-13 03:55:14.156027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:44:13.874 [2024-12-13 03:55:14.156116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:13.874 [2024-12-13 03:55:14.273609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:13.874 [2024-12-13 03:55:14.380615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:13.874 [2024-12-13 03:55:14.380664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:44:13.874 [2024-12-13 03:55:14.380674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:13.874 [2024-12-13 03:55:14.380701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:13.874 [2024-12-13 03:55:14.380710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:13.874 [2024-12-13 03:55:14.383114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:13.875 [2024-12-13 03:55:14.383191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:44:13.875 [2024-12-13 03:55:14.383297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:13.875 [2024-12-13 03:55:14.383307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:13.875 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:13.875 INFO: Log level set to 20 00:44:13.875 INFO: Requests: 00:44:13.875 { 00:44:13.875 "jsonrpc": "2.0", 00:44:13.875 "method": "nvmf_set_config", 00:44:13.875 "id": 1, 00:44:13.875 "params": { 00:44:13.875 "admin_cmd_passthru": { 00:44:13.875 "identify_ctrlr": true 00:44:13.875 } 00:44:13.875 } 00:44:13.875 } 00:44:13.875 00:44:13.875 INFO: response: 00:44:13.875 { 00:44:13.875 "jsonrpc": "2.0", 00:44:13.875 "id": 1, 00:44:13.875 "result": true 00:44:13.875 } 00:44:13.875 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:13.875 03:55:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:13.875 03:55:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:13.875 INFO: Setting log level to 20 00:44:13.875 INFO: Setting log level to 20 00:44:13.875 INFO: Log level set to 20 00:44:13.875 INFO: Log level set to 20 00:44:13.875 INFO: Requests: 00:44:13.875 { 00:44:13.875 "jsonrpc": "2.0", 00:44:13.875 "method": "framework_start_init", 00:44:13.875 "id": 1 00:44:13.875 } 00:44:13.875 00:44:13.875 INFO: Requests: 00:44:13.875 { 00:44:13.875 "jsonrpc": "2.0", 00:44:13.875 "method": "framework_start_init", 00:44:13.875 "id": 1 00:44:13.875 } 00:44:13.875 00:44:14.134 [2024-12-13 03:55:15.306985] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:14.134 INFO: response: 00:44:14.134 { 00:44:14.134 "jsonrpc": "2.0", 00:44:14.134 "id": 1, 00:44:14.134 "result": true 00:44:14.134 } 00:44:14.134 00:44:14.134 INFO: response: 00:44:14.134 { 00:44:14.134 "jsonrpc": "2.0", 00:44:14.134 "id": 1, 00:44:14.134 "result": true 00:44:14.134 } 00:44:14.134 00:44:14.134 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.134 03:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:14.134 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.134 03:55:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:44:14.134 INFO: Setting log level to 40 00:44:14.134 INFO: Setting log level to 40 00:44:14.134 INFO: Setting log level to 40 00:44:14.134 [2024-12-13 03:55:15.323565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:14.392 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:14.392 03:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:14.392 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:14.392 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:14.392 03:55:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:44:14.392 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:14.392 03:55:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.683 Nvme0n1 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.683 [2024-12-13 03:55:18.295856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.683 [ 00:44:17.683 { 00:44:17.683 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:17.683 "subtype": "Discovery", 00:44:17.683 "listen_addresses": [], 00:44:17.683 "allow_any_host": true, 00:44:17.683 "hosts": [] 00:44:17.683 }, 00:44:17.683 { 00:44:17.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:17.683 "subtype": "NVMe", 00:44:17.683 "listen_addresses": [ 00:44:17.683 { 00:44:17.683 "trtype": "TCP", 00:44:17.683 "adrfam": "IPv4", 00:44:17.683 "traddr": "10.0.0.2", 00:44:17.683 "trsvcid": "4420" 00:44:17.683 } 00:44:17.683 ], 00:44:17.683 "allow_any_host": true, 00:44:17.683 "hosts": [], 00:44:17.683 "serial_number": 
"SPDK00000000000001", 00:44:17.683 "model_number": "SPDK bdev Controller", 00:44:17.683 "max_namespaces": 1, 00:44:17.683 "min_cntlid": 1, 00:44:17.683 "max_cntlid": 65519, 00:44:17.683 "namespaces": [ 00:44:17.683 { 00:44:17.683 "nsid": 1, 00:44:17.683 "bdev_name": "Nvme0n1", 00:44:17.683 "name": "Nvme0n1", 00:44:17.683 "nguid": "9E3EFCA2E6A541FA8A530C7467470847", 00:44:17.683 "uuid": "9e3efca2-e6a5-41fa-8a53-0c7467470847" 00:44:17.683 } 00:44:17.683 ] 00:44:17.683 } 00:44:17.683 ] 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:17.683 03:55:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:17.683 rmmod nvme_tcp 00:44:17.683 rmmod nvme_fabrics 00:44:17.683 rmmod nvme_keyring 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3010543 ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3010543 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3010543 ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3010543 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010543 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010543' 00:44:17.683 killing process with pid 3010543 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3010543 00:44:17.683 03:55:18 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3010543 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:20.218 03:55:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:20.218 03:55:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:20.218 03:55:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:22.755 03:55:23 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:22.755 00:44:22.755 real 0m23.625s 00:44:22.755 user 0m33.376s 00:44:22.755 sys 0m6.119s 00:44:22.755 03:55:23 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:22.755 03:55:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:22.755 ************************************ 00:44:22.755 END TEST nvmf_identify_passthru 00:44:22.755 ************************************ 00:44:22.755 03:55:23 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:22.755 03:55:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:22.755 03:55:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:22.755 03:55:23 -- common/autotest_common.sh@10 -- # set +x 00:44:22.755 ************************************ 00:44:22.755 START TEST nvmf_dif 00:44:22.755 ************************************ 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:22.755 * Looking for test 
storage... 00:44:22.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:22.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.755 --rc genhtml_branch_coverage=1 00:44:22.755 --rc genhtml_function_coverage=1 00:44:22.755 --rc genhtml_legend=1 00:44:22.755 --rc geninfo_all_blocks=1 00:44:22.755 --rc geninfo_unexecuted_blocks=1 00:44:22.755 00:44:22.755 ' 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:22.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.755 --rc genhtml_branch_coverage=1 00:44:22.755 --rc genhtml_function_coverage=1 00:44:22.755 --rc genhtml_legend=1 00:44:22.755 --rc geninfo_all_blocks=1 00:44:22.755 --rc geninfo_unexecuted_blocks=1 00:44:22.755 00:44:22.755 ' 00:44:22.755 03:55:23 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:22.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.755 --rc genhtml_branch_coverage=1 00:44:22.755 --rc genhtml_function_coverage=1 00:44:22.755 --rc genhtml_legend=1 00:44:22.755 --rc geninfo_all_blocks=1 00:44:22.755 --rc geninfo_unexecuted_blocks=1 00:44:22.755 00:44:22.755 ' 00:44:22.755 03:55:23 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:22.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:22.755 --rc genhtml_branch_coverage=1 00:44:22.755 --rc genhtml_function_coverage=1 00:44:22.755 --rc genhtml_legend=1 00:44:22.755 --rc geninfo_all_blocks=1 00:44:22.755 --rc geninfo_unexecuted_blocks=1 00:44:22.755 00:44:22.755 ' 00:44:22.755 03:55:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:22.755 03:55:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:22.755 03:55:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:22.756 03:55:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.756 03:55:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.756 03:55:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.756 03:55:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:22.756 03:55:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:22.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:22.756 03:55:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:22.756 03:55:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:22.756 03:55:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:22.756 03:55:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:22.756 03:55:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:22.756 03:55:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:22.756 03:55:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:22.756 03:55:23 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:44:22.756 03:55:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:28.029 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:28.029 
03:55:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:28.029 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:28.029 Found net devices under 0000:af:00.0: cvl_0_0 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:28.029 Found net devices under 0000:af:00.1: cvl_0_1 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:28.029 03:55:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:28.030 03:55:28 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:28.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:28.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:44:28.030 00:44:28.030 --- 10.0.0.2 ping statistics --- 00:44:28.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:28.030 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:28.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:28.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:44:28.030 00:44:28.030 --- 10.0.0.1 ping statistics --- 00:44:28.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:28.030 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:28.030 03:55:29 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:30.566 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:44:30.566 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:44:30.566 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:30.566 03:55:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:30.566 03:55:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3016222 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:30.566 03:55:31 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3016222 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3016222 ']' 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:30.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:30.566 03:55:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:30.566 [2024-12-13 03:55:31.675357] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:44:30.567 [2024-12-13 03:55:31.675449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:30.825 [2024-12-13 03:55:31.792133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:30.825 [2024-12-13 03:55:31.899838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:30.825 [2024-12-13 03:55:31.899880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:30.825 [2024-12-13 03:55:31.899892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:30.825 [2024-12-13 03:55:31.899902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:30.825 [2024-12-13 03:55:31.899909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:30.825 [2024-12-13 03:55:31.901255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:31.393 03:55:32 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 03:55:32 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:31.393 03:55:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:31.393 03:55:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 [2024-12-13 03:55:32.512531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.393 03:55:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 ************************************ 00:44:31.393 START TEST fio_dif_1_default 00:44:31.393 ************************************ 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 bdev_null0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:31.393 [2024-12-13 03:55:32.584859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:31.393 { 00:44:31.393 "params": { 00:44:31.393 "name": "Nvme$subsystem", 00:44:31.393 "trtype": "$TEST_TRANSPORT", 00:44:31.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:31.393 "adrfam": "ipv4", 00:44:31.393 "trsvcid": "$NVMF_PORT", 00:44:31.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:31.393 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:44:31.393 "hdgst": ${hdgst:-false}, 00:44:31.393 "ddgst": ${ddgst:-false} 00:44:31.393 }, 00:44:31.393 "method": "bdev_nvme_attach_controller" 00:44:31.393 } 00:44:31.393 EOF 00:44:31.393 )") 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:31.393 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:31.394 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:31.394 03:55:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:31.394 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:31.394 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:31.394 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:31.653 "params": { 00:44:31.653 "name": "Nvme0", 00:44:31.653 "trtype": "tcp", 00:44:31.653 "traddr": "10.0.0.2", 00:44:31.653 "adrfam": "ipv4", 00:44:31.653 "trsvcid": "4420", 00:44:31.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:31.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:31.653 "hdgst": false, 00:44:31.653 "ddgst": false 00:44:31.653 }, 00:44:31.653 "method": "bdev_nvme_attach_controller" 00:44:31.653 }' 00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:31.653 03:55:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:31.912 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:31.912 fio-3.35 00:44:31.912 Starting 1 thread 00:44:44.234 00:44:44.234 filename0: (groupid=0, jobs=1): err= 0: pid=3016702: Fri Dec 13 03:55:43 2024 00:44:44.234 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10021msec) 00:44:44.234 slat (nsec): min=7027, max=42010, avg=9071.12, stdev=3148.80 00:44:44.234 clat (usec): min=40857, max=42910, avg=41726.28, stdev=431.79 00:44:44.234 lat (usec): min=40864, max=42952, avg=41735.35, stdev=431.89 00:44:44.234 clat percentiles (usec): 00:44:44.234 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:44:44.234 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:44:44.234 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:44:44.234 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:44:44.234 | 99.99th=[42730] 00:44:44.234 bw ( KiB/s): min= 352, max= 384, per=99.69%, avg=382.40, stdev= 7.16, samples=20 00:44:44.234 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:44:44.234 lat (msec) : 50=100.00% 00:44:44.234 cpu : usr=93.64%, sys=6.00%, ctx=16, majf=0, minf=1632 00:44:44.234 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:44.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:44.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:44.234 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:44.234 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:44.234 00:44:44.234 Run status group 0 (all jobs): 00:44:44.234 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10021-10021msec 00:44:44.234 ----------------------------------------------------- 00:44:44.234 Suppressions used: 00:44:44.234 count bytes template 00:44:44.234 1 8 /usr/src/fio/parse.c 00:44:44.234 1 8 libtcmalloc_minimal.so 00:44:44.234 1 904 libcrypto.so 00:44:44.234 ----------------------------------------------------- 00:44:44.234 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 
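The fio run that completes above is launched by the harness through the spdk_bdev fio plugin, with the job file and the generated JSON configuration handed over on anonymous file descriptors; a minimal sketch of that invocation, using the paths observed in this run (the /dev/fd/61 job file and /dev/fd/62 config are produced on the fly by the test script, and the ASAN runtime is preloaded because this build is instrumented):

    # run fio with the SPDK bdev ioengine; the JSON conf attaches Nvme0 over NVMe/TCP
    LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61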
00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.234 00:44:44.234 real 0m12.584s 00:44:44.234 user 0m17.733s 00:44:44.234 sys 0m1.142s 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:44.234 ************************************ 00:44:44.234 END TEST fio_dif_1_default 00:44:44.234 ************************************ 00:44:44.234 03:55:45 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:44.234 03:55:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:44.234 03:55:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:44.234 03:55:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:44.234 ************************************ 00:44:44.234 START TEST fio_dif_1_multi_subsystems 00:44:44.234 ************************************ 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:44.234 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 bdev_null0 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:44.235 
03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 [2024-12-13 03:55:45.235528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 bdev_null1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:44.235 { 00:44:44.235 "params": { 00:44:44.235 "name": "Nvme$subsystem", 00:44:44.235 "trtype": "$TEST_TRANSPORT", 00:44:44.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:44.235 "adrfam": "ipv4", 00:44:44.235 "trsvcid": "$NVMF_PORT", 00:44:44.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:44.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:44.235 "hdgst": ${hdgst:-false}, 00:44:44.235 "ddgst": ${ddgst:-false} 00:44:44.235 }, 00:44:44.235 "method": "bdev_nvme_attach_controller" 00:44:44.235 } 00:44:44.235 EOF 00:44:44.235 )") 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@73 -- # cat 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:44.235 { 00:44:44.235 "params": { 00:44:44.235 "name": "Nvme$subsystem", 00:44:44.235 "trtype": "$TEST_TRANSPORT", 00:44:44.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:44.235 "adrfam": "ipv4", 00:44:44.235 "trsvcid": "$NVMF_PORT", 00:44:44.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:44.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:44.235 "hdgst": ${hdgst:-false}, 00:44:44.235 "ddgst": ${ddgst:-false} 00:44:44.235 }, 00:44:44.235 "method": "bdev_nvme_attach_controller" 00:44:44.235 } 00:44:44.235 EOF 00:44:44.235 )") 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:44.235 "params": { 00:44:44.235 "name": "Nvme0", 00:44:44.235 "trtype": "tcp", 00:44:44.235 "traddr": "10.0.0.2", 00:44:44.235 "adrfam": "ipv4", 00:44:44.235 "trsvcid": "4420", 00:44:44.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:44.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:44.235 "hdgst": false, 00:44:44.235 "ddgst": false 00:44:44.235 }, 00:44:44.235 "method": "bdev_nvme_attach_controller" 00:44:44.235 },{ 00:44:44.235 "params": { 00:44:44.235 "name": "Nvme1", 00:44:44.235 "trtype": "tcp", 00:44:44.235 "traddr": "10.0.0.2", 00:44:44.235 "adrfam": "ipv4", 00:44:44.235 "trsvcid": "4420", 00:44:44.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:44.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:44.235 "hdgst": false, 00:44:44.235 "ddgst": false 00:44:44.235 }, 00:44:44.235 "method": "bdev_nvme_attach_controller" 00:44:44.235 }' 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:44.235 03:55:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:44.494 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:44.494 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:44.494 fio-3.35 00:44:44.494 Starting 2 threads 00:44:56.703 00:44:56.703 filename0: 
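The template/expansion pair above is plain per-id substitution: gen_nvmf_target_json renders the heredoc once per subsystem id with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT filled in, comma-joins the entries via IFS=',', and feeds the result to fio through --spdk_json_conf. A minimal sketch of that expansion (values copied from the printed result; the outer JSON wrapper the plugin ultimately receives is built around these entries and is not visible in this trace):

  entry() {   # one bdev_nvme_attach_controller config entry per subsystem id
    printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$1" "$1" "$1"
  }
  config="$(entry 0),$(entry 1)"   # same comma join that the IFS=, printf performs above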
(groupid=0, jobs=1): err= 0: pid=3018841: Fri Dec 13 03:55:56 2024 00:44:56.703 read: IOPS=189, BW=758KiB/s (777kB/s)(7600KiB/10021msec) 00:44:56.703 slat (nsec): min=6899, max=27163, avg=8370.07, stdev=2039.38 00:44:56.703 clat (usec): min=481, max=42475, avg=21070.82, stdev=20537.05 00:44:56.703 lat (usec): min=488, max=42483, avg=21079.19, stdev=20536.50 00:44:56.703 clat percentiles (usec): 00:44:56.703 | 1.00th=[ 490], 5.00th=[ 506], 10.00th=[ 519], 20.00th=[ 537], 00:44:56.703 | 30.00th=[ 553], 40.00th=[ 619], 50.00th=[ 6194], 60.00th=[41157], 00:44:56.703 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:44:56.703 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:44:56.703 | 99.99th=[42730] 00:44:56.703 bw ( KiB/s): min= 704, max= 768, per=66.09%, avg=758.40, stdev=23.45, samples=20 00:44:56.703 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:44:56.703 lat (usec) : 500=3.79%, 750=45.42%, 1000=0.68% 00:44:56.703 lat (msec) : 10=0.21%, 50=49.89% 00:44:56.703 cpu : usr=96.76%, sys=2.95%, ctx=12, majf=0, minf=1632 00:44:56.703 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.703 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.704 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:56.704 filename1: (groupid=0, jobs=1): err= 0: pid=3018842: Fri Dec 13 03:55:56 2024 00:44:56.704 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10030msec) 00:44:56.704 slat (nsec): min=6970, max=26313, avg=9047.28, stdev=2760.45 00:44:56.704 clat (usec): min=40731, max=48072, avg=41075.71, stdev=541.17 00:44:56.704 lat (usec): min=40738, max=48098, avg=41084.76, stdev=541.60 00:44:56.704 clat percentiles (usec): 00:44:56.704 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:44:56.704 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:56.704 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:44:56.704 | 99.00th=[43254], 99.50th=[43254], 99.90th=[47973], 99.95th=[47973], 00:44:56.704 | 99.99th=[47973] 00:44:56.704 bw ( KiB/s): min= 384, max= 416, per=33.83%, avg=388.80, stdev=11.72, samples=20 00:44:56.704 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:44:56.704 lat (msec) : 50=100.00% 00:44:56.704 cpu : usr=96.80%, sys=2.91%, ctx=14, majf=0, minf=1634 00:44:56.704 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.704 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.704 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:56.704 00:44:56.704 Run status group 0 (all jobs): 00:44:56.704 READ: bw=1147KiB/s (1174kB/s), 389KiB/s-758KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10021-10030msec 00:44:56.963 ----------------------------------------------------- 00:44:56.963 Suppressions used: 00:44:56.963 count bytes template 00:44:56.963 2 16 /usr/src/fio/parse.c 00:44:56.963 1 8 libtcmalloc_minimal.so 00:44:56.963 1 904 libcrypto.so 00:44:56.963 ----------------------------------------------------- 00:44:56.963 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
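The summary figures for this group can be re-derived from the issued I/O counts in the two job reports above, which is a quick consistency check on the 4KiB read path:

  # filename0: 1900 reads x 4KiB = 7600KiB over 10.021s -> ~758 KiB/s, ~190 IOPS
  # filename1:  976 reads x 4KiB = 3904KiB over 10.030s -> ~389 KiB/s,  ~97 IOPS
  # group    : 7600KiB + 3904KiB = 11504KiB (~11.2MiB) at ~1147 KiB/s aggregate
  awk 'BEGIN { printf "%.1f %.1f KiB/s\n", 7600/10.021, 3904/10.030 }'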
destroy_subsystems 0 1 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 00:44:56.963 real 0m12.868s 00:44:56.963 user 0m27.772s 00:44:56.963 sys 0m1.063s 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 ************************************ 00:44:56.963 END TEST fio_dif_1_multi_subsystems 00:44:56.963 ************************************ 00:44:56.963 03:55:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:44:56.963 03:55:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:56.963 03:55:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 ************************************ 00:44:56.963 START TEST fio_dif_rand_params 00:44:56.963 ************************************ 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 bdev_null0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.963 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:56.964 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.964 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:44:57.223 [2024-12-13 03:55:58.173897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:44:57.223 
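This phase re-creates subsystem 0 on a DIF type 3 null bdev and drives it with 128KiB random reads, three jobs at iodepth 3 for roughly five seconds. The harness builds the actual job file with gen_fio_conf and passes it on /dev/fd/61, so the following is only a rough standalone equivalent (flag set assumed; conf.json stands in for the generated attach-controller JSON, the Nvme0n1 bdev name is inferred from the Nvme0 controller name, and --time_based is inferred from the ~5s runtimes):

  LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
      --name=filename0 --ioengine=spdk_bdev --spdk_json_conf conf.json \
      --filename=Nvme0n1 --rw=randread --bs=128k \
      --numjobs=3 --iodepth=3 --runtime=5 --time_based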
03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:57.223 { 00:44:57.223 "params": { 00:44:57.223 "name": "Nvme$subsystem", 00:44:57.223 "trtype": "$TEST_TRANSPORT", 00:44:57.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:57.223 "adrfam": "ipv4", 00:44:57.223 "trsvcid": "$NVMF_PORT", 00:44:57.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:57.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:57.223 "hdgst": ${hdgst:-false}, 00:44:57.223 "ddgst": ${ddgst:-false} 00:44:57.223 }, 00:44:57.223 "method": "bdev_nvme_attach_controller" 00:44:57.223 } 00:44:57.223 EOF 00:44:57.223 )") 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
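One non-obvious step in the wrapper traced here: the fio plugin is built with AddressSanitizer, and an ASan-instrumented shared object generally needs its runtime loaded ahead of everything else, so the harness asks the plugin which libasan it links against and prepends that to LD_PRELOAD before invoking stock fio. The detection visible above reduces to:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8 on this node
  [[ -n $asan_lib ]] && LD_PRELOAD="$asan_lib $plugin"          # ASan runtime first, then the plugin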
00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:57.223 "params": { 00:44:57.223 "name": "Nvme0", 00:44:57.223 "trtype": "tcp", 00:44:57.223 "traddr": "10.0.0.2", 00:44:57.223 "adrfam": "ipv4", 00:44:57.223 "trsvcid": "4420", 00:44:57.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:57.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:57.223 "hdgst": false, 00:44:57.223 "ddgst": false 00:44:57.223 }, 00:44:57.223 "method": "bdev_nvme_attach_controller" 00:44:57.223 }' 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:57.223 03:55:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:57.481 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:44:57.481 ... 00:44:57.481 fio-3.35 00:44:57.481 Starting 3 threads 00:45:04.054 00:45:04.054 filename0: (groupid=0, jobs=1): err= 0: pid=3020969: Fri Dec 13 03:56:04 2024 00:45:04.054 read: IOPS=281, BW=35.2MiB/s (37.0MB/s)(178MiB/5043msec) 00:45:04.054 slat (nsec): min=7541, max=56629, avg=16482.40, stdev=5098.28 00:45:04.054 clat (usec): min=6128, max=51588, avg=10590.02, stdev=4104.50 00:45:04.054 lat (usec): min=6148, max=51604, avg=10606.50, stdev=4104.61 00:45:04.054 clat percentiles (usec): 00:45:04.054 | 1.00th=[ 6652], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9241], 00:45:04.054 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:45:04.054 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[12518], 00:45:04.054 | 99.00th=[14484], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:45:04.054 | 99.99th=[51643] 00:45:04.054 bw ( KiB/s): min=32512, max=39424, per=36.21%, avg=36352.00, stdev=2548.60, samples=10 00:45:04.054 iops : min= 254, max= 308, avg=284.00, stdev=19.91, samples=10 00:45:04.054 lat (msec) : 10=45.99%, 20=53.02%, 50=0.42%, 100=0.56% 00:45:04.054 cpu : usr=95.22%, sys=4.38%, ctx=34, majf=0, minf=1635 00:45:04.054 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:04.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.054 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:04.054 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:04.054 filename0: (groupid=0, jobs=1): err= 0: pid=3020970: Fri Dec 13 03:56:04 2024 00:45:04.054 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(147MiB/5003msec) 00:45:04.054 slat (nsec): min=7459, max=53219, avg=16263.14, stdev=5108.54 00:45:04.054 clat (usec): min=3730, max=53421, avg=12729.90, stdev=4527.90 00:45:04.054 lat (usec): min=3745, max=53448, avg=12746.16, stdev=4527.75 00:45:04.054 clat percentiles (usec): 00:45:04.054 | 1.00th=[ 7308], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11076], 00:45:04.054 | 
30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:45:04.054 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14353], 95.00th=[15008], 00:45:04.054 | 99.00th=[49021], 99.50th=[49021], 99.90th=[52691], 99.95th=[53216], 00:45:04.054 | 99.99th=[53216] 00:45:04.054 bw ( KiB/s): min=23808, max=33280, per=29.78%, avg=29895.11, stdev=2826.97, samples=9 00:45:04.054 iops : min= 186, max= 260, avg=233.56, stdev=22.09, samples=9 00:45:04.054 lat (msec) : 4=0.08%, 10=8.33%, 20=90.31%, 50=0.85%, 100=0.42% 00:45:04.054 cpu : usr=95.32%, sys=4.28%, ctx=7, majf=0, minf=1638 00:45:04.054 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:04.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.054 issued rwts: total=1177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:04.054 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:04.054 filename0: (groupid=0, jobs=1): err= 0: pid=3020971: Fri Dec 13 03:56:04 2024 00:45:04.054 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(170MiB/5045msec) 00:45:04.054 slat (nsec): min=7383, max=53153, avg=15781.83, stdev=4737.65 00:45:04.054 clat (usec): min=3834, max=47601, avg=11096.33, stdev=3017.64 00:45:04.054 lat (usec): min=3846, max=47619, avg=11112.12, stdev=3018.64 00:45:04.054 clat percentiles (usec): 00:45:04.054 | 1.00th=[ 4146], 5.00th=[ 7177], 10.00th=[ 8848], 20.00th=[ 9634], 00:45:04.054 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11600], 00:45:04.054 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13566], 95.00th=[14222], 00:45:04.054 | 99.00th=[15401], 99.50th=[15664], 99.90th=[46924], 99.95th=[47449], 00:45:04.054 | 99.99th=[47449] 00:45:04.054 bw ( KiB/s): min=30464, max=36864, per=34.55%, avg=34688.00, stdev=2013.03, samples=10 00:45:04.054 iops : min= 238, max= 288, avg=271.00, stdev=15.73, samples=10 00:45:04.054 lat (msec) : 4=0.22%, 10=28.50%, 20=70.91%, 50=0.37% 00:45:04.054 cpu : usr=95.30%, sys=4.32%, ctx=7, majf=0, minf=1634 00:45:04.054 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:04.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.054 issued rwts: total=1358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:04.054 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:04.054 00:45:04.054 Run status group 0 (all jobs): 00:45:04.054 READ: bw=98.0MiB/s (103MB/s), 29.4MiB/s-35.2MiB/s (30.8MB/s-37.0MB/s), io=495MiB (519MB), run=5003-5045msec 00:45:04.620 ----------------------------------------------------- 00:45:04.620 Suppressions used: 00:45:04.620 count bytes template 00:45:04.620 5 44 /usr/src/fio/parse.c 00:45:04.620 1 8 libtcmalloc_minimal.so 00:45:04.620 1 904 libcrypto.so 00:45:04.620 ----------------------------------------------------- 00:45:04.620 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
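As with the earlier group, the per-job numbers in the reports above can be re-derived from the issued I/O counts, this time with 128KiB blocks:

  # pid 3020969: 1422 reads x 128KiB ~= 178MiB in 5.043s -> ~35.2 MiB/s
  # pid 3020970: 1177 reads x 128KiB ~= 147MiB in 5.003s -> ~29.4 MiB/s
  # pid 3020971: 1358 reads x 128KiB ~= 170MiB in 5.045s -> ~33.6 MiB/s
  # total: (1422+1177+1358) x 128KiB ~= 495MiB, matching the group io= figure
  awk 'BEGIN { printf "%.1f %.1f %.1f MiB/s\n", 1422/8/5.043, 1177/8/5.003, 1358/8/5.045 }'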
nqn.2016-06.io.spdk:cnode0 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.620 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 bdev_null0 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 [2024-12-13 03:56:05.876290] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 bdev_null1 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 bdev_null2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:04.880 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:04.880 { 00:45:04.880 "params": { 00:45:04.881 "name": "Nvme$subsystem", 00:45:04.881 "trtype": "$TEST_TRANSPORT", 00:45:04.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:04.881 "adrfam": "ipv4", 00:45:04.881 "trsvcid": "$NVMF_PORT", 00:45:04.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:04.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:04.881 "hdgst": ${hdgst:-false}, 00:45:04.881 "ddgst": ${ddgst:-false} 00:45:04.881 }, 00:45:04.881 "method": "bdev_nvme_attach_controller" 00:45:04.881 } 00:45:04.881 EOF 00:45:04.881 )") 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:04.881 { 00:45:04.881 "params": { 00:45:04.881 "name": "Nvme$subsystem", 00:45:04.881 "trtype": "$TEST_TRANSPORT", 00:45:04.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:04.881 "adrfam": "ipv4", 00:45:04.881 "trsvcid": "$NVMF_PORT", 00:45:04.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:04.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:04.881 "hdgst": ${hdgst:-false}, 00:45:04.881 "ddgst": ${ddgst:-false} 00:45:04.881 }, 00:45:04.881 "method": "bdev_nvme_attach_controller" 00:45:04.881 } 00:45:04.881 EOF 00:45:04.881 )") 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:04.881 { 00:45:04.881 "params": { 00:45:04.881 "name": "Nvme$subsystem", 00:45:04.881 "trtype": "$TEST_TRANSPORT", 00:45:04.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:04.881 "adrfam": "ipv4", 00:45:04.881 "trsvcid": "$NVMF_PORT", 00:45:04.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:04.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:04.881 "hdgst": ${hdgst:-false}, 00:45:04.881 "ddgst": ${ddgst:-false} 00:45:04.881 }, 00:45:04.881 "method": "bdev_nvme_attach_controller" 00:45:04.881 } 00:45:04.881 EOF 00:45:04.881 )") 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:04.881 "params": { 00:45:04.881 "name": "Nvme0", 00:45:04.881 "trtype": "tcp", 00:45:04.881 "traddr": "10.0.0.2", 00:45:04.881 "adrfam": "ipv4", 00:45:04.881 "trsvcid": "4420", 00:45:04.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:04.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:04.881 "hdgst": false, 00:45:04.881 "ddgst": false 00:45:04.881 }, 00:45:04.881 "method": "bdev_nvme_attach_controller" 00:45:04.881 },{ 00:45:04.881 "params": { 00:45:04.881 "name": "Nvme1", 00:45:04.881 "trtype": "tcp", 00:45:04.881 "traddr": "10.0.0.2", 00:45:04.881 "adrfam": "ipv4", 00:45:04.881 "trsvcid": "4420", 00:45:04.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:04.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:04.881 "hdgst": false, 00:45:04.881 "ddgst": false 00:45:04.881 }, 00:45:04.881 "method": "bdev_nvme_attach_controller" 00:45:04.881 },{ 00:45:04.881 "params": { 00:45:04.881 "name": "Nvme2", 00:45:04.881 "trtype": "tcp", 00:45:04.881 "traddr": "10.0.0.2", 00:45:04.881 "adrfam": "ipv4", 00:45:04.881 "trsvcid": "4420", 00:45:04.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:04.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:04.881 "hdgst": false, 00:45:04.881 "ddgst": false 00:45:04.881 }, 00:45:04.881 "method": "bdev_nvme_attach_controller" 00:45:04.881 }' 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:04.881 03:56:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:04.881 03:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:04.881 03:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.139 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:05.139 ... 00:45:05.139 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:05.139 ... 00:45:05.139 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:05.139 ... 
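The last phase fans out across all three exported namespaces: the JSON above attaches Nvme0, Nvme1 and Nvme2, and with numjobs=8 against the three job sections the run below starts 8 x 3 = 24 threads of 4KiB random reads at iodepth 16. The first job report below is again self-consistent:

  # 3 job sections (filename0/1/2) x numjobs=8 = 24 threads
  # pid 3022408: 4544 reads x 4KiB = 18176KiB over 10.022s -> ~1814 KiB/s, ~453 IOPS
  awk 'BEGIN { printf "%d threads, %.0f KiB/s\n", 3*8, 4544*4/10.022 }'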
00:45:05.139 fio-3.35 00:45:05.139 Starting 24 threads 00:45:17.348 00:45:17.348 filename0: (groupid=0, jobs=1): err= 0: pid=3022408: Fri Dec 13 03:56:17 2024 00:45:17.348 read: IOPS=453, BW=1814KiB/s (1857kB/s)(17.8MiB/10022msec) 00:45:17.348 slat (usec): min=7, max=105, avg=27.73, stdev=16.49 00:45:17.348 clat (usec): min=8809, max=45873, avg=35064.46, stdev=3094.11 00:45:17.348 lat (usec): min=8818, max=45897, avg=35092.19, stdev=3092.49 00:45:17.348 clat percentiles (usec): 00:45:17.348 | 1.00th=[15270], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:17.348 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.348 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[36963], 00:45:17.348 | 99.00th=[39060], 99.50th=[43779], 99.90th=[44303], 99.95th=[44827], 00:45:17.348 | 99.99th=[45876] 00:45:17.348 bw ( KiB/s): min= 1664, max= 2048, per=4.20%, avg=1811.20, stdev=94.39, samples=20 00:45:17.348 iops : min= 416, max= 512, avg=452.80, stdev=23.60, samples=20 00:45:17.348 lat (msec) : 10=0.31%, 20=0.92%, 50=98.77% 00:45:17.348 cpu : usr=98.62%, sys=0.94%, ctx=41, majf=0, minf=1632 00:45:17.348 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:17.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.348 filename0: (groupid=0, jobs=1): err= 0: pid=3022409: Fri Dec 13 03:56:17 2024 00:45:17.348 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10012msec) 00:45:17.348 slat (usec): min=14, max=138, avg=46.15, stdev=22.69 00:45:17.348 clat (usec): min=21868, max=95961, avg=35304.90, stdev=3893.70 00:45:17.348 lat (usec): min=21895, max=95994, avg=35351.04, stdev=3890.31 00:45:17.348 clat percentiles (usec): 00:45:17.348 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.348 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35390], 60.00th=[35914], 00:45:17.348 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.348 | 99.00th=[37487], 99.50th=[38536], 99.90th=[95945], 99.95th=[95945], 00:45:17.348 | 99.99th=[95945] 00:45:17.348 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1785.26, stdev=99.82, samples=19 00:45:17.348 iops : min= 384, max= 480, avg=446.32, stdev=24.96, samples=19 00:45:17.348 lat (msec) : 50=99.64%, 100=0.36% 00:45:17.348 cpu : usr=98.47%, sys=1.11%, ctx=13, majf=0, minf=1632 00:45:17.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:17.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.348 filename0: (groupid=0, jobs=1): err= 0: pid=3022410: Fri Dec 13 03:56:17 2024 00:45:17.348 read: IOPS=448, BW=1794KiB/s (1837kB/s)(17.6MiB/10026msec) 00:45:17.348 slat (usec): min=7, max=104, avg=33.96, stdev=21.30 00:45:17.348 clat (usec): min=25184, max=69675, avg=35339.58, stdev=2519.55 00:45:17.348 lat (usec): min=25200, max=69705, avg=35373.54, stdev=2514.41 00:45:17.348 clat percentiles (usec): 00:45:17.348 | 1.00th=[33162], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:45:17.348 | 30.00th=[34341], 
40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.348 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.348 | 99.00th=[38536], 99.50th=[39060], 99.90th=[69731], 99.95th=[69731], 00:45:17.348 | 99.99th=[69731] 00:45:17.348 bw ( KiB/s): min= 1532, max= 1920, per=4.14%, avg=1785.05, stdev=100.38, samples=19 00:45:17.348 iops : min= 383, max= 480, avg=446.26, stdev=25.10, samples=19 00:45:17.348 lat (msec) : 50=99.64%, 100=0.36% 00:45:17.348 cpu : usr=98.46%, sys=1.11%, ctx=22, majf=0, minf=1633 00:45:17.348 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.348 filename0: (groupid=0, jobs=1): err= 0: pid=3022411: Fri Dec 13 03:56:17 2024 00:45:17.348 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10011msec) 00:45:17.348 slat (usec): min=5, max=133, avg=30.10, stdev=21.61 00:45:17.348 clat (usec): min=14667, max=82072, avg=35447.38, stdev=5410.98 00:45:17.348 lat (usec): min=14729, max=82093, avg=35477.48, stdev=5409.20 00:45:17.348 clat percentiles (usec): 00:45:17.348 | 1.00th=[16909], 5.00th=[32113], 10.00th=[33817], 20.00th=[34341], 00:45:17.348 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.348 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36963], 95.00th=[37487], 00:45:17.348 | 99.00th=[56361], 99.50th=[59507], 99.90th=[82314], 99.95th=[82314], 00:45:17.348 | 99.99th=[82314] 00:45:17.348 bw ( KiB/s): min= 1536, max= 1952, per=4.14%, avg=1786.11, stdev=98.37, samples=19 00:45:17.348 iops : min= 384, max= 488, avg=446.53, stdev=24.59, samples=19 00:45:17.348 lat (msec) : 20=2.37%, 50=94.69%, 100=2.95% 00:45:17.348 cpu : usr=98.51%, sys=1.06%, ctx=16, majf=0, minf=1633 00:45:17.348 IO depths : 1=2.4%, 2=8.3%, 4=24.1%, 8=55.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:45:17.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 issued rwts: total=4482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.348 filename0: (groupid=0, jobs=1): err= 0: pid=3022412: Fri Dec 13 03:56:17 2024 00:45:17.348 read: IOPS=446, BW=1787KiB/s (1830kB/s)(17.5MiB/10004msec) 00:45:17.348 slat (usec): min=5, max=134, avg=34.21, stdev=23.53 00:45:17.348 clat (msec): min=17, max=105, avg=35.46, stdev= 3.66 00:45:17.348 lat (msec): min=17, max=105, avg=35.50, stdev= 3.66 00:45:17.348 clat percentiles (msec): 00:45:17.348 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:45:17.348 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:45:17.348 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:45:17.348 | 99.00th=[ 39], 99.50th=[ 56], 99.90th=[ 83], 99.95th=[ 83], 00:45:17.348 | 99.99th=[ 106] 00:45:17.348 bw ( KiB/s): min= 1536, max= 1920, per=4.13%, avg=1781.05, stdev=95.41, samples=19 00:45:17.348 iops : min= 384, max= 480, avg=445.26, stdev=23.85, samples=19 00:45:17.348 lat (msec) : 20=0.31%, 50=98.84%, 100=0.81%, 250=0.04% 00:45:17.348 cpu : usr=98.61%, sys=0.93%, ctx=20, majf=0, minf=1632 00:45:17.348 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:17.348 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.348 issued rwts: total=4470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.348 filename0: (groupid=0, jobs=1): err= 0: pid=3022413: Fri Dec 13 03:56:17 2024 00:45:17.348 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10008msec) 00:45:17.348 slat (usec): min=4, max=109, avg=34.12, stdev=21.27 00:45:17.348 clat (usec): min=25187, max=49609, avg=35282.02, stdev=1645.71 00:45:17.348 lat (usec): min=25204, max=49626, avg=35316.15, stdev=1638.33 00:45:17.348 clat percentiles (usec): 00:45:17.348 | 1.00th=[33162], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:45:17.348 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.348 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.348 | 99.00th=[38536], 99.50th=[38536], 99.90th=[49546], 99.95th=[49546], 00:45:17.348 | 99.99th=[49546] 00:45:17.348 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1792.00, stdev=85.33, samples=19 00:45:17.348 iops : min= 416, max= 480, avg=448.00, stdev=21.33, samples=19 00:45:17.348 lat (msec) : 50=100.00% 00:45:17.348 cpu : usr=98.65%, sys=0.92%, ctx=15, majf=0, minf=1633 00:45:17.348 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename0: (groupid=0, jobs=1): err= 0: pid=3022414: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=453, BW=1815KiB/s (1858kB/s)(17.8MiB/10015msec) 00:45:17.349 slat (usec): min=8, max=135, avg=43.98, stdev=24.70 00:45:17.349 clat (usec): min=8921, max=48374, avg=34887.33, stdev=2981.18 00:45:17.349 lat (usec): min=8931, max=48385, avg=34931.31, stdev=2979.03 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[13304], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[37487], 99.50th=[38536], 99.90th=[43254], 99.95th=[43254], 00:45:17.349 | 99.99th=[48497] 00:45:17.349 bw ( KiB/s): min= 1664, max= 2048, per=4.20%, avg=1811.20, stdev=95.38, samples=20 00:45:17.349 iops : min= 416, max= 512, avg=452.80, stdev=23.85, samples=20 00:45:17.349 lat (msec) : 10=0.15%, 20=1.23%, 50=98.61% 00:45:17.349 cpu : usr=98.68%, sys=0.89%, ctx=16, majf=0, minf=1633 00:45:17.349 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename0: (groupid=0, jobs=1): err= 0: pid=3022415: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10006msec) 00:45:17.349 slat (usec): min=8, max=137, avg=45.71, stdev=23.70 00:45:17.349 clat (usec): min=25820, max=49605, avg=35203.16, stdev=1453.37 00:45:17.349 
lat (usec): min=25850, max=49614, avg=35248.87, stdev=1443.42 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[38011], 99.50th=[38536], 99.90th=[49546], 99.95th=[49546], 00:45:17.349 | 99.99th=[49546] 00:45:17.349 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1792.16, stdev=85.09, samples=19 00:45:17.349 iops : min= 416, max= 480, avg=448.00, stdev=21.33, samples=19 00:45:17.349 lat (msec) : 50=100.00% 00:45:17.349 cpu : usr=98.40%, sys=1.17%, ctx=13, majf=0, minf=1636 00:45:17.349 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022416: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10015msec) 00:45:17.349 slat (usec): min=9, max=139, avg=45.37, stdev=23.38 00:45:17.349 clat (usec): min=21823, max=97890, avg=35313.19, stdev=4043.50 00:45:17.349 lat (usec): min=21861, max=97920, avg=35358.56, stdev=4039.71 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[32637], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.349 | 30.00th=[33817], 40.00th=[34341], 50.00th=[35390], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[38011], 99.50th=[45876], 99.90th=[98042], 99.95th=[98042], 00:45:17.349 | 99.99th=[98042] 00:45:17.349 bw ( KiB/s): min= 1536, max= 1920, per=4.12%, avg=1778.53, stdev=94.40, samples=19 00:45:17.349 iops : min= 384, max= 480, avg=444.63, stdev=23.60, samples=19 00:45:17.349 lat (msec) : 50=99.64%, 100=0.36% 00:45:17.349 cpu : usr=98.46%, sys=1.11%, ctx=13, majf=0, minf=1635 00:45:17.349 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022417: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=452, BW=1809KiB/s (1852kB/s)(17.7MiB/10013msec) 00:45:17.349 slat (usec): min=4, max=137, avg=45.35, stdev=24.18 00:45:17.349 clat (usec): min=8363, max=42600, avg=34979.66, stdev=2547.99 00:45:17.349 lat (usec): min=8372, max=42631, avg=35025.00, stdev=2544.87 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[22152], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[37487], 99.50th=[38536], 99.90th=[42730], 99.95th=[42730], 00:45:17.349 | 99.99th=[42730] 00:45:17.349 bw ( KiB/s): min= 1664, max= 1920, per=4.18%, avg=1804.80, stdev=82.01, samples=20 00:45:17.349 iops : min= 416, max= 480, avg=451.20, stdev=20.50, 
samples=20 00:45:17.349 lat (msec) : 10=0.35%, 20=0.35%, 50=99.29% 00:45:17.349 cpu : usr=98.54%, sys=1.03%, ctx=15, majf=0, minf=1635 00:45:17.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022418: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10016msec) 00:45:17.349 slat (usec): min=8, max=137, avg=45.75, stdev=23.33 00:45:17.349 clat (msec): min=21, max=107, avg=35.32, stdev= 4.17 00:45:17.349 lat (msec): min=21, max=107, avg=35.36, stdev= 4.16 00:45:17.349 clat percentiles (msec): 00:45:17.349 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:17.349 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:45:17.349 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:45:17.349 | 99.00th=[ 39], 99.50th=[ 40], 99.90th=[ 100], 99.95th=[ 100], 00:45:17.349 | 99.99th=[ 108] 00:45:17.349 bw ( KiB/s): min= 1410, max= 1920, per=4.12%, avg=1778.63, stdev=111.67, samples=19 00:45:17.349 iops : min= 352, max= 480, avg=444.63, stdev=28.01, samples=19 00:45:17.349 lat (msec) : 50=99.64%, 100=0.31%, 250=0.04% 00:45:17.349 cpu : usr=98.34%, sys=1.21%, ctx=21, majf=0, minf=1635 00:45:17.349 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022419: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10022msec) 00:45:17.349 slat (usec): min=7, max=135, avg=46.64, stdev=23.14 00:45:17.349 clat (msec): min=21, max=100, avg=35.17, stdev= 3.42 00:45:17.349 lat (msec): min=21, max=100, avg=35.22, stdev= 3.41 00:45:17.349 clat percentiles (msec): 00:45:17.349 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:17.349 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:45:17.349 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:45:17.349 | 99.00th=[ 39], 99.50th=[ 60], 99.90th=[ 78], 99.95th=[ 78], 00:45:17.349 | 99.99th=[ 102] 00:45:17.349 bw ( KiB/s): min= 1584, max= 1920, per=4.14%, avg=1787.79, stdev=93.58, samples=19 00:45:17.349 iops : min= 396, max= 480, avg=446.95, stdev=23.39, samples=19 00:45:17.349 lat (msec) : 50=99.47%, 100=0.49%, 250=0.04% 00:45:17.349 cpu : usr=98.78%, sys=0.78%, ctx=14, majf=0, minf=1635 00:45:17.349 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022420: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=447, BW=1789KiB/s 
(1832kB/s)(17.5MiB/10014msec) 00:45:17.349 slat (usec): min=5, max=134, avg=44.22, stdev=23.11 00:45:17.349 clat (usec): min=21802, max=96707, avg=35316.10, stdev=3950.15 00:45:17.349 lat (usec): min=21811, max=96727, avg=35360.32, stdev=3946.14 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[32900], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[37487], 99.50th=[39060], 99.90th=[96994], 99.95th=[96994], 00:45:17.349 | 99.99th=[96994] 00:45:17.349 bw ( KiB/s): min= 1539, max= 1920, per=4.12%, avg=1778.68, stdev=93.97, samples=19 00:45:17.349 iops : min= 384, max= 480, avg=444.63, stdev=23.60, samples=19 00:45:17.349 lat (msec) : 50=99.64%, 100=0.36% 00:45:17.349 cpu : usr=98.63%, sys=0.93%, ctx=9, majf=0, minf=1635 00:45:17.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022421: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.5MiB/10005msec) 00:45:17.349 slat (usec): min=8, max=138, avg=46.76, stdev=23.00 00:45:17.349 clat (usec): min=31054, max=74535, avg=35304.46, stdev=2647.04 00:45:17.349 lat (usec): min=31093, max=74561, avg=35351.22, stdev=2640.58 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:45:17.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[37487], 99.50th=[38536], 99.90th=[74974], 99.95th=[74974], 00:45:17.349 | 99.99th=[74974] 00:45:17.349 bw ( KiB/s): min= 1539, max= 1920, per=4.14%, avg=1785.42, stdev=99.41, samples=19 00:45:17.349 iops : min= 384, max= 480, avg=446.32, stdev=24.96, samples=19 00:45:17.349 lat (msec) : 50=99.64%, 100=0.36% 00:45:17.349 cpu : usr=98.56%, sys=1.00%, ctx=16, majf=0, minf=1636 00:45:17.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022422: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=448, BW=1792KiB/s (1835kB/s)(17.5MiB/10012msec) 00:45:17.349 slat (usec): min=8, max=135, avg=44.17, stdev=23.14 00:45:17.349 clat (msec): min=21, max=114, avg=35.26, stdev= 4.15 00:45:17.349 lat (msec): min=21, max=114, avg=35.31, stdev= 4.15 00:45:17.349 clat percentiles (msec): 00:45:17.349 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:17.349 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:45:17.349 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:45:17.349 | 99.00th=[ 39], 99.50th=[ 54], 99.90th=[ 95], 99.95th=[ 95], 00:45:17.349 | 99.99th=[ 114] 00:45:17.349 bw ( KiB/s): 
min= 1587, max= 1920, per=4.14%, avg=1787.95, stdev=93.21, samples=19 00:45:17.349 iops : min= 396, max= 480, avg=446.95, stdev=23.39, samples=19 00:45:17.349 lat (msec) : 50=99.47%, 100=0.49%, 250=0.04% 00:45:17.349 cpu : usr=98.72%, sys=0.85%, ctx=14, majf=0, minf=1633 00:45:17.349 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename1: (groupid=0, jobs=1): err= 0: pid=3022423: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10015msec) 00:45:17.349 slat (usec): min=5, max=131, avg=23.74, stdev=21.47 00:45:17.349 clat (msec): min=11, max=114, avg=33.70, stdev= 6.71 00:45:17.349 lat (msec): min=11, max=114, avg=33.72, stdev= 6.71 00:45:17.349 clat percentiles (msec): 00:45:17.349 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 31], 00:45:17.349 | 30.00th=[ 33], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 35], 00:45:17.349 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 42], 00:45:17.349 | 99.00th=[ 51], 99.50th=[ 53], 99.90th=[ 96], 99.95th=[ 96], 00:45:17.349 | 99.99th=[ 115] 00:45:17.349 bw ( KiB/s): min= 1603, max= 2064, per=4.38%, avg=1889.00, stdev=91.77, samples=19 00:45:17.349 iops : min= 400, max= 516, avg=472.21, stdev=23.07, samples=19 00:45:17.349 lat (msec) : 20=1.77%, 50=96.79%, 100=1.39%, 250=0.04% 00:45:17.349 cpu : usr=98.48%, sys=1.09%, ctx=14, majf=0, minf=1633 00:45:17.349 IO depths : 1=0.1%, 2=1.1%, 4=5.7%, 8=77.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:45:17.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 complete : 0=0.0%, 4=89.8%, 8=7.7%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.349 issued rwts: total=4740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.349 filename2: (groupid=0, jobs=1): err= 0: pid=3022424: Fri Dec 13 03:56:17 2024 00:45:17.349 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.1MiB/10027msec) 00:45:17.349 slat (usec): min=3, max=106, avg=29.12, stdev=16.35 00:45:17.349 clat (usec): min=1710, max=42609, avg=34459.27, stdev=5213.71 00:45:17.349 lat (usec): min=1721, max=42634, avg=34488.39, stdev=5215.06 00:45:17.349 clat percentiles (usec): 00:45:17.349 | 1.00th=[ 3982], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:17.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.349 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.349 | 99.00th=[38011], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:45:17.349 | 99.99th=[42730] 00:45:17.349 bw ( KiB/s): min= 1664, max= 2688, per=4.27%, avg=1843.20, stdev=213.38, samples=20 00:45:17.350 iops : min= 416, max= 672, avg=460.80, stdev=53.34, samples=20 00:45:17.350 lat (msec) : 2=0.67%, 4=0.35%, 10=1.21%, 20=0.89%, 50=96.89% 00:45:17.350 cpu : usr=98.55%, sys=0.97%, ctx=34, majf=0, minf=1637 00:45:17.350 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 
latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022425: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=449, BW=1797KiB/s (1840kB/s)(17.6MiB/10008msec) 00:45:17.350 slat (usec): min=5, max=147, avg=30.52, stdev=15.10 00:45:17.350 clat (usec): min=24378, max=67720, avg=35362.69, stdev=1694.80 00:45:17.350 lat (usec): min=24387, max=67740, avg=35393.21, stdev=1689.60 00:45:17.350 clat percentiles (usec): 00:45:17.350 | 1.00th=[33162], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:17.350 | 30.00th=[34341], 40.00th=[34866], 50.00th=[35914], 60.00th=[35914], 00:45:17.350 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.350 | 99.00th=[38536], 99.50th=[38536], 99.90th=[49546], 99.95th=[49546], 00:45:17.350 | 99.99th=[67634] 00:45:17.350 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1792.00, stdev=85.33, samples=19 00:45:17.350 iops : min= 416, max= 480, avg=448.00, stdev=21.33, samples=19 00:45:17.350 lat (msec) : 50=99.96%, 100=0.04% 00:45:17.350 cpu : usr=98.57%, sys=0.95%, ctx=36, majf=0, minf=1635 00:45:17.350 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022426: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=447, BW=1791KiB/s (1834kB/s)(17.6MiB/10035msec) 00:45:17.350 slat (usec): min=6, max=146, avg=29.44, stdev=14.75 00:45:17.350 clat (usec): min=17107, max=79467, avg=35474.22, stdev=2351.11 00:45:17.350 lat (usec): min=17123, max=79492, avg=35503.67, stdev=2347.38 00:45:17.350 clat percentiles (usec): 00:45:17.350 | 1.00th=[33424], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:17.350 | 30.00th=[34341], 40.00th=[34866], 50.00th=[35914], 60.00th=[35914], 00:45:17.350 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.350 | 99.00th=[38536], 99.50th=[38536], 99.90th=[67634], 99.95th=[67634], 00:45:17.350 | 99.99th=[79168] 00:45:17.350 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1785.26, stdev=79.52, samples=19 00:45:17.350 iops : min= 416, max= 480, avg=446.32, stdev=19.88, samples=19 00:45:17.350 lat (msec) : 20=0.04%, 50=99.60%, 100=0.36% 00:45:17.350 cpu : usr=98.13%, sys=1.33%, ctx=25, majf=0, minf=1635 00:45:17.350 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022427: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10010msec) 00:45:17.350 slat (nsec): min=6626, max=85496, avg=21430.57, stdev=16193.89 00:45:17.350 clat (usec): min=16692, max=81402, avg=35550.97, stdev=3197.61 00:45:17.350 lat (usec): min=16703, max=81427, avg=35572.40, stdev=3198.78 00:45:17.350 clat percentiles (usec): 00:45:17.350 | 1.00th=[33817], 5.00th=[34341], 10.00th=[34341], 20.00th=[34341], 00:45:17.350 | 30.00th=[34341], 
40.00th=[34866], 50.00th=[35914], 60.00th=[35914], 00:45:17.350 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.350 | 99.00th=[38011], 99.50th=[54789], 99.90th=[81265], 99.95th=[81265], 00:45:17.350 | 99.99th=[81265] 00:45:17.350 bw ( KiB/s): min= 1520, max= 1920, per=4.14%, avg=1785.26, stdev=102.22, samples=19 00:45:17.350 iops : min= 380, max= 480, avg=446.32, stdev=25.55, samples=19 00:45:17.350 lat (msec) : 20=0.22%, 50=99.20%, 100=0.58% 00:45:17.350 cpu : usr=98.49%, sys=1.02%, ctx=58, majf=0, minf=1634 00:45:17.350 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022428: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=447, BW=1790KiB/s (1833kB/s)(17.5MiB/10022msec) 00:45:17.350 slat (usec): min=4, max=143, avg=31.56, stdev=16.75 00:45:17.350 clat (usec): min=25335, max=81984, avg=35451.87, stdev=3351.44 00:45:17.350 lat (usec): min=25348, max=82000, avg=35483.44, stdev=3347.79 00:45:17.350 clat percentiles (usec): 00:45:17.350 | 1.00th=[29492], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:17.350 | 30.00th=[34341], 40.00th=[34341], 50.00th=[35914], 60.00th=[35914], 00:45:17.350 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963], 00:45:17.350 | 99.00th=[38536], 99.50th=[60031], 99.90th=[82314], 99.95th=[82314], 00:45:17.350 | 99.99th=[82314] 00:45:17.350 bw ( KiB/s): min= 1536, max= 1920, per=4.13%, avg=1781.05, stdev=91.77, samples=19 00:45:17.350 iops : min= 384, max= 480, avg=445.26, stdev=22.94, samples=19 00:45:17.350 lat (msec) : 50=99.42%, 100=0.58% 00:45:17.350 cpu : usr=98.34%, sys=1.11%, ctx=33, majf=0, minf=1634 00:45:17.350 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022429: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10029msec) 00:45:17.350 slat (nsec): min=5456, max=91515, avg=29024.16, stdev=15995.91 00:45:17.350 clat (usec): min=16411, max=84011, avg=35513.93, stdev=3654.74 00:45:17.350 lat (usec): min=16440, max=84030, avg=35542.96, stdev=3651.04 00:45:17.350 clat percentiles (usec): 00:45:17.350 | 1.00th=[29230], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:45:17.350 | 30.00th=[34341], 40.00th=[34866], 50.00th=[35914], 60.00th=[35914], 00:45:17.350 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36963], 95.00th=[36963], 00:45:17.350 | 99.00th=[38011], 99.50th=[56361], 99.90th=[84411], 99.95th=[84411], 00:45:17.350 | 99.99th=[84411] 00:45:17.350 bw ( KiB/s): min= 1536, max= 1920, per=4.13%, avg=1781.05, stdev=95.41, samples=19 00:45:17.350 iops : min= 384, max= 480, avg=445.26, stdev=23.85, samples=19 00:45:17.350 lat (msec) : 20=0.27%, 50=98.89%, 100=0.85% 00:45:17.350 cpu : usr=98.30%, sys=1.17%, ctx=69, majf=0, minf=1632 00:45:17.350 IO depths : 1=5.9%, 2=12.0%, 
4=24.7%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022430: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10018msec) 00:45:17.350 slat (usec): min=7, max=132, avg=48.37, stdev=22.63 00:45:17.350 clat (msec): min=21, max=108, avg=35.31, stdev= 4.24 00:45:17.350 lat (msec): min=21, max=108, avg=35.36, stdev= 4.23 00:45:17.350 clat percentiles (msec): 00:45:17.350 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:45:17.350 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:45:17.350 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 37], 95.00th=[ 37], 00:45:17.350 | 99.00th=[ 38], 99.50th=[ 39], 99.90th=[ 102], 99.95th=[ 102], 00:45:17.350 | 99.99th=[ 109] 00:45:17.350 bw ( KiB/s): min= 1408, max= 1920, per=4.12%, avg=1778.53, stdev=112.03, samples=19 00:45:17.350 iops : min= 352, max= 480, avg=444.63, stdev=28.01, samples=19 00:45:17.350 lat (msec) : 50=99.64%, 250=0.36% 00:45:17.350 cpu : usr=98.47%, sys=1.09%, ctx=14, majf=0, minf=1633 00:45:17.350 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 filename2: (groupid=0, jobs=1): err= 0: pid=3022431: Fri Dec 13 03:56:17 2024 00:45:17.350 read: IOPS=449, BW=1798KiB/s (1841kB/s)(17.6MiB/10004msec) 00:45:17.350 slat (usec): min=5, max=143, avg=17.95, stdev=11.26 00:45:17.350 clat (usec): min=16755, max=49728, avg=35445.02, stdev=1563.28 00:45:17.350 lat (usec): min=16806, max=49757, avg=35462.97, stdev=1562.10 00:45:17.350 clat percentiles (usec): 00:45:17.350 | 1.00th=[33817], 5.00th=[34341], 10.00th=[34341], 20.00th=[34341], 00:45:17.350 | 30.00th=[34341], 40.00th=[34866], 50.00th=[35914], 60.00th=[35914], 00:45:17.350 | 70.00th=[36439], 80.00th=[36439], 90.00th=[36963], 95.00th=[36963], 00:45:17.350 | 99.00th=[38011], 99.50th=[39060], 99.90th=[49546], 99.95th=[49546], 00:45:17.350 | 99.99th=[49546] 00:45:17.350 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1792.00, stdev=73.90, samples=19 00:45:17.350 iops : min= 416, max= 480, avg=448.00, stdev=18.48, samples=19 00:45:17.350 lat (msec) : 20=0.20%, 50=99.80% 00:45:17.350 cpu : usr=98.31%, sys=1.18%, ctx=40, majf=0, minf=1634 00:45:17.350 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:17.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:17.350 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:17.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:17.350 00:45:17.350 Run status group 0 (all jobs): 00:45:17.350 READ: bw=42.1MiB/s (44.2MB/s), 1787KiB/s-1893KiB/s (1830kB/s-1939kB/s), io=423MiB (443MB), run=10004-10035msec 00:45:17.917 ----------------------------------------------------- 00:45:17.917 Suppressions used: 00:45:17.917 count bytes 
template 00:45:17.917 45 402 /usr/src/fio/parse.c 00:45:17.917 1 8 libtcmalloc_minimal.so 00:45:17.917 1 904 libcrypto.so 00:45:17.917 ----------------------------------------------------- 00:45:17.917 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.917 bdev_null0 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.917 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:17.918 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:17.918 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:17.918 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:17.918 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:18.177 [2024-12-13 03:56:19.134596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:18.177 bdev_null1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:18.177 { 00:45:18.177 "params": { 00:45:18.177 "name": "Nvme$subsystem", 00:45:18.177 "trtype": "$TEST_TRANSPORT", 00:45:18.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:18.177 "adrfam": "ipv4", 00:45:18.177 "trsvcid": "$NVMF_PORT", 00:45:18.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:18.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:18.177 "hdgst": ${hdgst:-false}, 00:45:18.177 "ddgst": ${ddgst:-false} 00:45:18.177 }, 00:45:18.177 "method": "bdev_nvme_attach_controller" 00:45:18.177 } 00:45:18.177 EOF 00:45:18.177 )") 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:18.177 { 00:45:18.177 "params": { 00:45:18.177 "name": "Nvme$subsystem", 00:45:18.177 "trtype": "$TEST_TRANSPORT", 00:45:18.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:18.177 "adrfam": "ipv4", 00:45:18.177 "trsvcid": "$NVMF_PORT", 00:45:18.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:18.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:18.177 "hdgst": ${hdgst:-false}, 00:45:18.177 "ddgst": ${ddgst:-false} 00:45:18.177 }, 00:45:18.177 "method": "bdev_nvme_attach_controller" 00:45:18.177 } 00:45:18.177 EOF 00:45:18.177 )") 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:18.177 "params": { 00:45:18.177 "name": "Nvme0", 00:45:18.177 "trtype": "tcp", 00:45:18.177 "traddr": "10.0.0.2", 00:45:18.177 "adrfam": "ipv4", 00:45:18.177 "trsvcid": "4420", 00:45:18.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:18.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:18.177 "hdgst": false, 00:45:18.177 "ddgst": false 00:45:18.177 }, 00:45:18.177 "method": "bdev_nvme_attach_controller" 00:45:18.177 },{ 00:45:18.177 "params": { 00:45:18.177 "name": "Nvme1", 00:45:18.177 "trtype": "tcp", 00:45:18.177 "traddr": "10.0.0.2", 00:45:18.177 "adrfam": "ipv4", 00:45:18.177 "trsvcid": "4420", 00:45:18.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:18.177 "hdgst": false, 00:45:18.177 "ddgst": false 00:45:18.177 }, 00:45:18.177 "method": "bdev_nvme_attach_controller" 00:45:18.177 }' 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:18.177 03:56:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:18.436 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:18.436 ... 00:45:18.436 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:18.436 ... 
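Note: the two job lines above come from the fio config the script streams in over /dev/fd/61, paired with the JSON config printed just above on /dev/fd/62. A minimal hand-written job file reproducing those jobs would look roughly like the sketch below; it assumes the JSON is saved as bdev.json and that the attached controllers expose bdevs named Nvme0n1 and Nvme1n1 (SPDK's usual naming, not shown explicitly in this log), so treat it as illustrative rather than the harness's exact file.
    ; sketch only: bdev.json and the Nvme0n1/Nvme1n1 bdev names are assumptions, not taken from this log
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=bdev.json
    thread=1
    time_based=1
    runtime=5
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
It would be launched the same way the harness does, with the bdev plugin preloaded, e.g. LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev /usr/src/fio/fio job.fio.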
00:45:18.436 fio-3.35 00:45:18.436 Starting 4 threads 00:45:25.002 00:45:25.002 filename0: (groupid=0, jobs=1): err= 0: pid=3024556: Fri Dec 13 03:56:25 2024 00:45:25.002 read: IOPS=2245, BW=17.5MiB/s (18.4MB/s)(87.8MiB/5004msec) 00:45:25.002 slat (nsec): min=7107, max=36686, avg=11906.58, stdev=4205.78 00:45:25.002 clat (usec): min=803, max=6551, avg=3524.20, stdev=471.56 00:45:25.002 lat (usec): min=817, max=6565, avg=3536.11, stdev=471.39 00:45:25.002 clat percentiles (usec): 00:45:25.002 | 1.00th=[ 2376], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3294], 00:45:25.002 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:45:25.002 | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 3982], 95.00th=[ 4359], 00:45:25.002 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 5997], 99.95th=[ 6259], 00:45:25.002 | 99.99th=[ 6521] 00:45:25.002 bw ( KiB/s): min=17088, max=19120, per=24.89%, avg=17972.80, stdev=534.51, samples=10 00:45:25.002 iops : min= 2136, max= 2390, avg=2246.60, stdev=66.81, samples=10 00:45:25.002 lat (usec) : 1000=0.02% 00:45:25.002 lat (msec) : 2=0.28%, 4=90.32%, 10=9.39% 00:45:25.002 cpu : usr=95.72%, sys=3.88%, ctx=9, majf=0, minf=1632 00:45:25.002 IO depths : 1=0.4%, 2=6.9%, 4=64.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 issued rwts: total=11237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:25.002 filename0: (groupid=0, jobs=1): err= 0: pid=3024557: Fri Dec 13 03:56:25 2024 00:45:25.002 read: IOPS=2218, BW=17.3MiB/s (18.2MB/s)(86.7MiB/5002msec) 00:45:25.002 slat (nsec): min=7108, max=36469, avg=12155.55, stdev=4067.64 00:45:25.002 clat (usec): min=718, max=6936, avg=3566.06, stdev=501.38 00:45:25.002 lat (usec): min=732, max=6964, avg=3578.22, stdev=501.01 00:45:25.002 clat percentiles (usec): 00:45:25.002 | 1.00th=[ 2474], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3359], 00:45:25.002 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:45:25.002 | 70.00th=[ 3589], 80.00th=[ 3720], 90.00th=[ 4047], 95.00th=[ 4490], 00:45:25.002 | 99.00th=[ 5604], 99.50th=[ 5932], 99.90th=[ 6521], 99.95th=[ 6652], 00:45:25.002 | 99.99th=[ 6783] 00:45:25.002 bw ( KiB/s): min=17200, max=18176, per=24.57%, avg=17742.40, stdev=277.99, samples=10 00:45:25.002 iops : min= 2150, max= 2272, avg=2217.80, stdev=34.75, samples=10 00:45:25.002 lat (usec) : 750=0.01%, 1000=0.01% 00:45:25.002 lat (msec) : 2=0.32%, 4=88.73%, 10=10.93% 00:45:25.002 cpu : usr=95.60%, sys=4.00%, ctx=9, majf=0, minf=1633 00:45:25.002 IO depths : 1=0.2%, 2=9.9%, 4=61.6%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 issued rwts: total=11097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:25.002 filename1: (groupid=0, jobs=1): err= 0: pid=3024558: Fri Dec 13 03:56:25 2024 00:45:25.002 read: IOPS=2252, BW=17.6MiB/s (18.5MB/s)(88.0MiB/5003msec) 00:45:25.002 slat (nsec): min=5729, max=35056, avg=12011.70, stdev=4029.34 00:45:25.002 clat (usec): min=751, max=6347, avg=3511.71, stdev=514.32 00:45:25.002 lat (usec): min=763, max=6361, avg=3523.72, stdev=514.11 00:45:25.002 clat percentiles (usec): 00:45:25.002 | 1.00th=[ 
2409], 5.00th=[ 2802], 10.00th=[ 2966], 20.00th=[ 3195], 00:45:25.002 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:45:25.002 | 70.00th=[ 3556], 80.00th=[ 3654], 90.00th=[ 3982], 95.00th=[ 4490], 00:45:25.002 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 6194], 99.95th=[ 6259], 00:45:25.002 | 99.99th=[ 6325] 00:45:25.002 bw ( KiB/s): min=17088, max=18720, per=24.96%, avg=18022.40, stdev=421.31, samples=10 00:45:25.002 iops : min= 2136, max= 2340, avg=2252.80, stdev=52.66, samples=10 00:45:25.002 lat (usec) : 1000=0.03% 00:45:25.002 lat (msec) : 2=0.38%, 4=89.66%, 10=9.93% 00:45:25.002 cpu : usr=96.04%, sys=3.56%, ctx=9, majf=0, minf=1631 00:45:25.002 IO depths : 1=0.5%, 2=9.1%, 4=63.0%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 issued rwts: total=11269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:25.002 filename1: (groupid=0, jobs=1): err= 0: pid=3024559: Fri Dec 13 03:56:25 2024 00:45:25.002 read: IOPS=2311, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5001msec) 00:45:25.002 slat (nsec): min=5070, max=35075, avg=11254.74, stdev=3949.47 00:45:25.002 clat (usec): min=724, max=7081, avg=3427.07, stdev=435.41 00:45:25.002 lat (usec): min=732, max=7100, avg=3438.33, stdev=435.47 00:45:25.002 clat percentiles (usec): 00:45:25.002 | 1.00th=[ 2278], 5.00th=[ 2704], 10.00th=[ 2900], 20.00th=[ 3130], 00:45:25.002 | 30.00th=[ 3326], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:45:25.002 | 70.00th=[ 3556], 80.00th=[ 3589], 90.00th=[ 3818], 95.00th=[ 4080], 00:45:25.002 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5800], 99.95th=[ 5997], 00:45:25.002 | 99.99th=[ 6063] 00:45:25.002 bw ( KiB/s): min=17712, max=19520, per=25.48%, avg=18394.67, stdev=565.46, samples=9 00:45:25.002 iops : min= 2214, max= 2440, avg=2299.33, stdev=70.68, samples=9 00:45:25.002 lat (usec) : 750=0.03%, 1000=0.03% 00:45:25.002 lat (msec) : 2=0.53%, 4=93.14%, 10=6.27% 00:45:25.002 cpu : usr=95.90%, sys=3.72%, ctx=6, majf=0, minf=1635 00:45:25.002 IO depths : 1=0.2%, 2=5.8%, 4=66.1%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.002 issued rwts: total=11559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.002 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:25.002 00:45:25.003 Run status group 0 (all jobs): 00:45:25.003 READ: bw=70.5MiB/s (73.9MB/s), 17.3MiB/s-18.1MiB/s (18.2MB/s-18.9MB/s), io=353MiB (370MB), run=5001-5004msec 00:45:25.937 ----------------------------------------------------- 00:45:25.937 Suppressions used: 00:45:25.937 count bytes template 00:45:25.937 6 52 /usr/src/fio/parse.c 00:45:25.937 1 8 libtcmalloc_minimal.so 00:45:25.937 1 904 libcrypto.so 00:45:25.937 ----------------------------------------------------- 00:45:25.937 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # 
local sub_id=0 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.937 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.938 00:45:25.938 real 0m28.767s 00:45:25.938 user 4m56.767s 00:45:25.938 sys 0m5.572s 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:25.938 03:56:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 ************************************ 00:45:25.938 END TEST fio_dif_rand_params 00:45:25.938 ************************************ 00:45:25.938 03:56:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:25.938 03:56:26 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:25.938 03:56:26 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:25.938 03:56:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 ************************************ 00:45:25.938 START TEST fio_dif_digest 00:45:25.938 ************************************ 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 
00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 bdev_null0 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.938 03:56:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:25.938 [2024-12-13 03:56:27.017852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
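Note: the rpc_cmd traces above are the harness's wrappers around SPDK's JSON-RPC interface. Outside the test script, the same DIF-protected null bdev and NVMe/TCP subsystem could be set up by hand with scripts/rpc.py against a running nvmf_tgt; the sketch below assumes the tcp transport was already created earlier in the run (as it was for the preceding tests) and simply mirrors the add_ns/add_listener RPCs traced around this point.
    # sketch of the equivalent standalone RPCs; assumes nvmf_tgt is running and the tcp transport already exists
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420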
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:25.938 { 00:45:25.938 "params": { 00:45:25.938 "name": "Nvme$subsystem", 00:45:25.938 "trtype": "$TEST_TRANSPORT", 00:45:25.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:25.938 "adrfam": "ipv4", 00:45:25.938 "trsvcid": "$NVMF_PORT", 00:45:25.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:25.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:25.938 "hdgst": ${hdgst:-false}, 00:45:25.938 "ddgst": ${ddgst:-false} 00:45:25.938 }, 00:45:25.938 "method": "bdev_nvme_attach_controller" 00:45:25.938 } 00:45:25.938 EOF 00:45:25.938 )") 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:25.938 "params": { 00:45:25.938 "name": "Nvme0", 00:45:25.938 "trtype": "tcp", 00:45:25.938 "traddr": "10.0.0.2", 00:45:25.938 "adrfam": "ipv4", 00:45:25.938 "trsvcid": "4420", 00:45:25.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:25.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:25.938 "hdgst": true, 00:45:25.938 "ddgst": true 00:45:25.938 }, 00:45:25.938 "method": "bdev_nvme_attach_controller" 00:45:25.938 }' 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:25.938 03:56:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:26.504 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:26.504 ... 00:45:26.504 fio-3.35 00:45:26.504 Starting 3 threads 00:45:38.701 00:45:38.701 filename0: (groupid=0, jobs=1): err= 0: pid=3025811: Fri Dec 13 03:56:38 2024 00:45:38.701 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(311MiB/10047msec) 00:45:38.701 slat (nsec): min=7494, max=61392, avg=25613.35, stdev=7885.19 00:45:38.701 clat (usec): min=8796, max=51935, avg=12088.03, stdev=1374.64 00:45:38.701 lat (usec): min=8809, max=51968, avg=12113.64, stdev=1374.05 00:45:38.701 clat percentiles (usec): 00:45:38.701 | 1.00th=[10159], 5.00th=[10814], 10.00th=[10945], 20.00th=[11338], 00:45:38.701 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:45:38.701 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13042], 95.00th=[13435], 00:45:38.701 | 99.00th=[14222], 99.50th=[14484], 99.90th=[16057], 99.95th=[49546], 00:45:38.701 | 99.99th=[52167] 00:45:38.701 bw ( KiB/s): min=30208, max=32512, per=35.85%, avg=31769.60, stdev=580.81, samples=20 00:45:38.701 iops : min= 236, max= 254, avg=248.20, stdev= 4.54, samples=20 00:45:38.701 lat (msec) : 10=0.48%, 20=99.44%, 50=0.04%, 100=0.04% 00:45:38.701 cpu : usr=96.24%, sys=3.06%, ctx=389, majf=0, minf=1634 00:45:38.701 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:38.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.701 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:38.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:38.701 filename0: (groupid=0, jobs=1): err= 0: pid=3025812: Fri Dec 13 03:56:38 2024 00:45:38.701 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(282MiB/10045msec) 00:45:38.701 slat (nsec): min=7488, max=45130, avg=19843.60, stdev=6057.61 00:45:38.701 clat (usec): min=10155, max=53800, avg=13327.43, stdev=1447.90 00:45:38.701 lat (usec): min=10169, max=53814, avg=13347.28, stdev=1448.18 00:45:38.701 clat percentiles (usec): 00:45:38.701 | 1.00th=[11207], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:45:38.701 | 30.00th=[12780], 40.00th=[13042], 
50.00th=[13304], 60.00th=[13435], 00:45:38.701 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:45:38.701 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16909], 99.95th=[49021], 00:45:38.701 | 99.99th=[53740] 00:45:38.701 bw ( KiB/s): min=27648, max=29696, per=32.53%, avg=28825.60, stdev=534.41, samples=20 00:45:38.701 iops : min= 216, max= 232, avg=225.20, stdev= 4.18, samples=20 00:45:38.701 lat (msec) : 20=99.91%, 50=0.04%, 100=0.04% 00:45:38.701 cpu : usr=95.90%, sys=3.74%, ctx=17, majf=0, minf=1633 00:45:38.701 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:38.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.701 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:38.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:38.701 filename0: (groupid=0, jobs=1): err= 0: pid=3025813: Fri Dec 13 03:56:38 2024 00:45:38.701 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(277MiB/10045msec) 00:45:38.701 slat (nsec): min=7538, max=45059, avg=19403.08, stdev=5937.59 00:45:38.701 clat (usec): min=10765, max=52159, avg=13545.46, stdev=1367.29 00:45:38.701 lat (usec): min=10780, max=52182, avg=13564.86, stdev=1367.72 00:45:38.701 clat percentiles (usec): 00:45:38.701 | 1.00th=[11600], 5.00th=[12125], 10.00th=[12518], 20.00th=[12780], 00:45:38.701 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:45:38.701 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[15008], 00:45:38.701 | 99.00th=[15795], 99.50th=[16057], 99.90th=[17171], 99.95th=[44827], 00:45:38.701 | 99.99th=[52167] 00:45:38.701 bw ( KiB/s): min=27648, max=28928, per=32.01%, avg=28364.80, stdev=367.71, samples=20 00:45:38.701 iops : min= 216, max= 226, avg=221.60, stdev= 2.87, samples=20 00:45:38.701 lat (msec) : 20=99.91%, 50=0.05%, 100=0.05% 00:45:38.701 cpu : usr=95.74%, sys=3.91%, ctx=17, majf=0, minf=1638 00:45:38.701 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:38.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:38.701 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:38.701 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:38.701 00:45:38.701 Run status group 0 (all jobs): 00:45:38.701 READ: bw=86.5MiB/s (90.7MB/s), 27.6MiB/s-30.9MiB/s (28.9MB/s-32.4MB/s), io=870MiB (912MB), run=10045-10047msec 00:45:38.701 ----------------------------------------------------- 00:45:38.701 Suppressions used: 00:45:38.701 count bytes template 00:45:38.701 5 44 /usr/src/fio/parse.c 00:45:38.701 1 8 libtcmalloc_minimal.so 00:45:38.701 1 904 libcrypto.so 00:45:38.701 ----------------------------------------------------- 00:45:38.701 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:38.701 00:45:38.701 real 0m12.568s 00:45:38.701 user 0m37.066s 00:45:38.701 sys 0m1.623s 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:38.701 03:56:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:38.701 ************************************ 00:45:38.701 END TEST fio_dif_digest 00:45:38.701 ************************************ 00:45:38.701 03:56:39 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:38.701 03:56:39 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:38.701 rmmod nvme_tcp 00:45:38.701 rmmod nvme_fabrics 00:45:38.701 rmmod nvme_keyring 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3016222 ']' 00:45:38.701 03:56:39 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3016222 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3016222 ']' 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3016222 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016222 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016222' 00:45:38.701 killing process with pid 3016222 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3016222 00:45:38.701 03:56:39 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3016222 00:45:39.636 03:56:40 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:39.636 03:56:40 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:42.169 Waiting for block devices as requested 00:45:42.169 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:45:42.169 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:42.169 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:42.169 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:42.169 
0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:42.427 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:42.427 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:42.427 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:42.427 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:42.686 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:42.686 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:42.686 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:42.945 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:42.945 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:42.945 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:43.204 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:43.204 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:43.204 03:56:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:43.204 03:56:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:43.204 03:56:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:45.735 03:56:46 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:45.735 00:45:45.735 real 1m22.934s 00:45:45.735 user 7m28.759s 00:45:45.735 sys 0m20.090s 00:45:45.735 03:56:46 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:45.735 03:56:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:45.735 ************************************ 00:45:45.735 END TEST nvmf_dif 00:45:45.735 ************************************ 00:45:45.735 03:56:46 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:45.735 03:56:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:45.735 03:56:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:45.735 03:56:46 -- common/autotest_common.sh@10 -- # set +x 00:45:45.735 ************************************ 00:45:45.735 START TEST nvmf_abort_qd_sizes 00:45:45.735 ************************************ 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:45.735 * Looking for test storage... 
00:45:45.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.735 --rc genhtml_branch_coverage=1 00:45:45.735 --rc genhtml_function_coverage=1 00:45:45.735 --rc genhtml_legend=1 00:45:45.735 --rc geninfo_all_blocks=1 00:45:45.735 --rc geninfo_unexecuted_blocks=1 00:45:45.735 00:45:45.735 ' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:45:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.735 --rc genhtml_branch_coverage=1 00:45:45.735 --rc genhtml_function_coverage=1 00:45:45.735 --rc genhtml_legend=1 00:45:45.735 --rc geninfo_all_blocks=1 00:45:45.735 --rc geninfo_unexecuted_blocks=1 00:45:45.735 00:45:45.735 ' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.735 --rc genhtml_branch_coverage=1 00:45:45.735 --rc genhtml_function_coverage=1 00:45:45.735 --rc genhtml_legend=1 00:45:45.735 --rc geninfo_all_blocks=1 00:45:45.735 --rc geninfo_unexecuted_blocks=1 00:45:45.735 00:45:45.735 ' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:45.735 --rc genhtml_branch_coverage=1 00:45:45.735 --rc genhtml_function_coverage=1 00:45:45.735 --rc genhtml_legend=1 00:45:45.735 --rc geninfo_all_blocks=1 00:45:45.735 --rc geninfo_unexecuted_blocks=1 00:45:45.735 00:45:45.735 ' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:45.735 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:45.736 03:56:46 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:45.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:45:45.736 03:56:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:51.004 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:45:51.005 Found 0000:af:00.0 (0x8086 - 0x159b) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:45:51.005 Found 0000:af:00.1 (0x8086 - 0x159b) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:45:51.005 Found net devices under 0000:af:00.0: cvl_0_0 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:45:51.005 Found net devices under 0000:af:00.1: cvl_0_1 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:51.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:51.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:45:51.005 00:45:51.005 --- 10.0.0.2 ping statistics --- 00:45:51.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:51.005 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:51.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:51.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:45:51.005 00:45:51.005 --- 10.0.0.1 ping statistics --- 00:45:51.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:51.005 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:45:51.005 03:56:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:53.538 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:45:53.538 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:45:54.113 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3033674 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3033674 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3033674 ']' 00:45:54.371 03:56:55 
nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:54.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:54.371 03:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:54.371 [2024-12-13 03:56:55.527835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:45:54.371 [2024-12-13 03:56:55.527934] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:54.629 [2024-12-13 03:56:55.647750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:54.629 [2024-12-13 03:56:55.760475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:54.629 [2024-12-13 03:56:55.760521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:54.629 [2024-12-13 03:56:55.760532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:54.629 [2024-12-13 03:56:55.760542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:54.629 [2024-12-13 03:56:55.760549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:45:54.629 [2024-12-13 03:56:55.762764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:45:54.629 [2024-12-13 03:56:55.762783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:45:54.629 [2024-12-13 03:56:55.762856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:45:54.629 [2024-12-13 03:56:55.762865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:55.196 03:56:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:55.454 ************************************ 00:45:55.454 START TEST spdk_target_abort 00:45:55.454 ************************************ 00:45:55.454 03:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:45:55.454 03:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:45:55.454 03:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b 
spdk_target 00:45:55.454 03:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:55.454 03:56:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:58.737 spdk_targetn1 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:58.737 [2024-12-13 03:56:59.320206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:45:58.737 [2024-12-13 03:56:59.374981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:58.737 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:45:58.738 03:56:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:02.021 Initializing NVMe Controllers 00:46:02.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:02.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:02.021 Initialization complete. Launching workers. 00:46:02.021 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13094, failed: 0 00:46:02.021 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 11847 00:46:02.021 success 686, unsuccessful 561, failed 0 00:46:02.021 03:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:02.021 03:57:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:05.455 Initializing NVMe Controllers 00:46:05.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:05.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:05.455 Initialization complete. Launching workers. 
00:46:05.455 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8480, failed: 0 00:46:05.455 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 7213 00:46:05.455 success 292, unsuccessful 975, failed 0 00:46:05.455 03:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:05.455 03:57:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:08.775 Initializing NVMe Controllers 00:46:08.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:08.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:08.775 Initialization complete. Launching workers. 00:46:08.775 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33962, failed: 0 00:46:08.775 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2726, failed to submit 31236 00:46:08.775 success 564, unsuccessful 2162, failed 0 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:08.775 03:57:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3033674 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3033674 ']' 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3033674 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033674 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033674' 00:46:09.708 killing process with pid 3033674 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3033674 00:46:09.708 03:57:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 3033674 00:46:10.643 00:46:10.643 real 0m15.257s 00:46:10.643 user 0m59.784s 00:46:10.643 sys 0m2.630s 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:10.643 ************************************ 00:46:10.643 END TEST spdk_target_abort 00:46:10.643 ************************************ 00:46:10.643 03:57:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:10.643 03:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:10.643 03:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:10.643 03:57:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:10.643 ************************************ 00:46:10.643 START TEST kernel_target_abort 00:46:10.643 ************************************ 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:10.643 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:10.644 03:57:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:13.172 Waiting for block devices as requested 00:46:13.172 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:13.172 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:13.172 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:13.172 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:13.172 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:13.431 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:13.431 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:13.431 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:13.431 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:13.689 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:13.689 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:13.689 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:13.948 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:13.948 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:13.948 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:13.948 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:14.211 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:14.780 No valid GPT data, bailing 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:14.780 03:57:15 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:46:14.780 00:46:14.780 Discovery Log Number of Records 2, Generation counter 2 00:46:14.780 =====Discovery Log Entry 0====== 00:46:14.780 trtype: tcp 00:46:14.780 adrfam: ipv4 00:46:14.780 subtype: current discovery subsystem 00:46:14.780 treq: not specified, sq flow control disable supported 00:46:14.780 portid: 1 00:46:14.780 trsvcid: 4420 00:46:14.780 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:14.780 traddr: 10.0.0.1 00:46:14.780 eflags: none 00:46:14.780 sectype: none 00:46:14.780 =====Discovery Log Entry 1====== 00:46:14.780 trtype: tcp 00:46:14.780 adrfam: ipv4 00:46:14.780 subtype: nvme subsystem 00:46:14.780 treq: not specified, sq flow control disable supported 00:46:14.780 portid: 1 00:46:14.780 trsvcid: 4420 00:46:14.780 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:14.780 traddr: 10.0.0.1 00:46:14.780 eflags: none 00:46:14.780 sectype: none 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.780 03:57:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:14.780 03:57:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:18.064 Initializing NVMe Controllers 00:46:18.064 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:18.064 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:18.064 Initialization complete. Launching workers. 00:46:18.064 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80629, failed: 0 00:46:18.064 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80629, failed to submit 0 00:46:18.064 success 0, unsuccessful 80629, failed 0 00:46:18.064 03:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:18.064 03:57:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:21.351 Initializing NVMe Controllers 00:46:21.351 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:21.351 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:21.351 Initialization complete. Launching workers. 
00:46:21.351 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 125213, failed: 0 00:46:21.351 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31414, failed to submit 93799 00:46:21.351 success 0, unsuccessful 31414, failed 0 00:46:21.351 03:57:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:21.351 03:57:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:24.634 Initializing NVMe Controllers 00:46:24.634 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:24.634 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:24.634 Initialization complete. Launching workers. 00:46:24.634 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120326, failed: 0 00:46:24.634 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30098, failed to submit 90228 00:46:24.634 success 0, unsuccessful 30098, failed 0 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:24.634 03:57:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:27.165 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:27.165 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:46:27.165 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:27.732 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:27.991 00:46:27.991 real 0m17.224s 00:46:27.991 user 0m9.058s 00:46:27.991 sys 0m4.944s 00:46:27.991 03:57:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:27.991 03:57:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:27.991 ************************************ 00:46:27.991 END TEST kernel_target_abort 00:46:27.991 ************************************ 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:27.991 rmmod nvme_tcp 00:46:27.991 rmmod nvme_fabrics 00:46:27.991 rmmod nvme_keyring 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3033674 ']' 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3033674 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3033674 ']' 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3033674 00:46:27.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3033674) - No such process 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3033674 is not found' 00:46:27.991 Process with pid 3033674 is not found 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:27.991 03:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:30.523 Waiting for block devices as requested 00:46:30.523 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:30.523 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:30.781 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:30.781 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:30.781 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:30.781 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:31.040 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:31.040 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:31.040 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:31.299 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:31.299 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:31.299 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:31.299 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:31.558 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:31.558 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:31.558 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:31.817 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:31.817 03:57:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:34.350 03:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:34.350 00:46:34.350 real 0m48.495s 00:46:34.350 user 1m13.004s 00:46:34.350 sys 0m15.420s 00:46:34.350 03:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:34.350 03:57:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:34.350 ************************************ 00:46:34.350 END TEST nvmf_abort_qd_sizes 00:46:34.350 ************************************ 00:46:34.350 03:57:34 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:34.350 03:57:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:34.350 03:57:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:34.350 03:57:34 -- common/autotest_common.sh@10 -- # set +x 00:46:34.350 ************************************ 00:46:34.350 START TEST keyring_file 00:46:34.350 ************************************ 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:34.350 * Looking for test storage... 
00:46:34.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:34.350 03:57:35 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.350 --rc genhtml_branch_coverage=1 00:46:34.350 --rc genhtml_function_coverage=1 00:46:34.350 --rc genhtml_legend=1 00:46:34.350 --rc geninfo_all_blocks=1 00:46:34.350 --rc geninfo_unexecuted_blocks=1 00:46:34.350 00:46:34.350 ' 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.350 --rc genhtml_branch_coverage=1 00:46:34.350 --rc genhtml_function_coverage=1 00:46:34.350 --rc genhtml_legend=1 00:46:34.350 --rc geninfo_all_blocks=1 
00:46:34.350 --rc geninfo_unexecuted_blocks=1 00:46:34.350 00:46:34.350 ' 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.350 --rc genhtml_branch_coverage=1 00:46:34.350 --rc genhtml_function_coverage=1 00:46:34.350 --rc genhtml_legend=1 00:46:34.350 --rc geninfo_all_blocks=1 00:46:34.350 --rc geninfo_unexecuted_blocks=1 00:46:34.350 00:46:34.350 ' 00:46:34.350 03:57:35 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:34.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:34.350 --rc genhtml_branch_coverage=1 00:46:34.350 --rc genhtml_function_coverage=1 00:46:34.350 --rc genhtml_legend=1 00:46:34.350 --rc geninfo_all_blocks=1 00:46:34.350 --rc geninfo_unexecuted_blocks=1 00:46:34.350 00:46:34.350 ' 00:46:34.350 03:57:35 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:34.350 03:57:35 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:34.350 03:57:35 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:34.351 03:57:35 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:34.351 03:57:35 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:34.351 03:57:35 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:34.351 03:57:35 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:34.351 03:57:35 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.351 03:57:35 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.351 03:57:35 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.351 03:57:35 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:34.351 03:57:35 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:34.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5k4M6I1NtB 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5k4M6I1NtB 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5k4M6I1NtB 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5k4M6I1NtB 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wioA8dDuuS 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:34.351 03:57:35 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wioA8dDuuS 00:46:34.351 03:57:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wioA8dDuuS 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.wioA8dDuuS 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@30 -- # tgtpid=3043198 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:34.351 03:57:35 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3043198 00:46:34.351 03:57:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3043198 ']' 00:46:34.351 03:57:35 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:34.351 03:57:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:34.351 03:57:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:34.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:34.351 03:57:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:34.351 03:57:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:34.351 [2024-12-13 03:57:35.404027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:34.351 [2024-12-13 03:57:35.404135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043198 ] 00:46:34.351 [2024-12-13 03:57:35.517233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:34.609 [2024-12-13 03:57:35.624234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:35.544 03:57:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:35.544 [2024-12-13 03:57:36.440874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:35.544 null0 00:46:35.544 [2024-12-13 03:57:36.472904] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:35.544 [2024-12-13 03:57:36.473269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.544 03:57:36 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:35.544 [2024-12-13 03:57:36.500970] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:35.544 request: 00:46:35.544 { 00:46:35.544 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:35.544 "secure_channel": false, 00:46:35.544 "listen_address": { 00:46:35.544 "trtype": "tcp", 00:46:35.544 "traddr": "127.0.0.1", 00:46:35.544 "trsvcid": "4420" 00:46:35.544 }, 00:46:35.544 "method": "nvmf_subsystem_add_listener", 00:46:35.544 "req_id": 1 00:46:35.544 } 00:46:35.544 Got JSON-RPC error response 00:46:35.544 response: 00:46:35.544 { 00:46:35.544 
"code": -32602, 00:46:35.544 "message": "Invalid parameters" 00:46:35.544 } 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:35.544 03:57:36 keyring_file -- keyring/file.sh@47 -- # bperfpid=3043426 00:46:35.544 03:57:36 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3043426 /var/tmp/bperf.sock 00:46:35.544 03:57:36 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3043426 ']' 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:35.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:35.544 03:57:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:35.544 [2024-12-13 03:57:36.580352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:35.544 [2024-12-13 03:57:36.580443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043426 ] 00:46:35.544 [2024-12-13 03:57:36.691212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:35.803 [2024-12-13 03:57:36.803447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:36.368 03:57:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:36.368 03:57:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:36.368 03:57:37 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:36.368 03:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:36.626 03:57:37 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wioA8dDuuS 00:46:36.626 03:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wioA8dDuuS 00:46:36.626 03:57:37 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:36.626 03:57:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:36.626 03:57:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:36.626 03:57:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:36.626 03:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:46:36.884 03:57:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5k4M6I1NtB == \/\t\m\p\/\t\m\p\.\5\k\4\M\6\I\1\N\t\B ]] 00:46:36.884 03:57:37 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:36.884 03:57:37 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:36.884 03:57:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:36.884 03:57:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:36.884 03:57:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:37.142 03:57:38 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.wioA8dDuuS == \/\t\m\p\/\t\m\p\.\w\i\o\A\8\d\D\u\u\S ]] 00:46:37.142 03:57:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:37.142 03:57:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:37.142 03:57:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:37.142 03:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:37.142 03:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:37.142 03:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:37.142 03:57:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:37.401 03:57:38 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:37.401 03:57:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:37.401 03:57:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:37.401 03:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:37.401 03:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:37.401 03:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:37.401 03:57:38 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:37.401 03:57:38 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:37.401 03:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:37.659 [2024-12-13 03:57:38.718460] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:37.659 nvme0n1 00:46:37.659 03:57:38 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:37.659 03:57:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:37.659 03:57:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:37.659 03:57:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:37.659 03:57:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:37.659 03:57:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:37.917 03:57:39 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:37.917 03:57:39 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:37.917 03:57:39 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:46:37.917 03:57:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:37.917 03:57:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:37.917 03:57:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:37.917 03:57:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:38.175 03:57:39 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:38.175 03:57:39 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:38.175 Running I/O for 1 seconds... 00:46:39.109 14781.00 IOPS, 57.74 MiB/s 00:46:39.109 Latency(us) 00:46:39.109 [2024-12-13T02:57:40.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:39.109 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:39.109 nvme0n1 : 1.01 14828.27 57.92 0.00 0.00 8613.07 5398.92 18599.74 00:46:39.109 [2024-12-13T02:57:40.318Z] =================================================================================================================== 00:46:39.109 [2024-12-13T02:57:40.318Z] Total : 14828.27 57.92 0.00 0.00 8613.07 5398.92 18599.74 00:46:39.109 { 00:46:39.109 "results": [ 00:46:39.109 { 00:46:39.109 "job": "nvme0n1", 00:46:39.109 "core_mask": "0x2", 00:46:39.109 "workload": "randrw", 00:46:39.109 "percentage": 50, 00:46:39.109 "status": "finished", 00:46:39.109 "queue_depth": 128, 00:46:39.109 "io_size": 4096, 00:46:39.109 "runtime": 1.005444, 00:46:39.109 "iops": 14828.274871599015, 00:46:39.109 "mibps": 57.92294871718365, 00:46:39.109 "io_failed": 0, 00:46:39.109 "io_timeout": 0, 00:46:39.109 "avg_latency_us": 8613.068417989773, 00:46:39.109 "min_latency_us": 5398.918095238095, 00:46:39.109 "max_latency_us": 18599.74095238095 00:46:39.109 } 00:46:39.109 ], 00:46:39.109 "core_count": 1 00:46:39.109 } 00:46:39.367 03:57:40 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:39.367 03:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:39.367 03:57:40 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:39.367 03:57:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:39.367 03:57:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:39.367 03:57:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:39.367 03:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:39.367 03:57:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:39.625 03:57:40 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:39.625 03:57:40 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:39.625 03:57:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:39.625 03:57:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:39.625 03:57:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:39.625 03:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:39.625 03:57:40 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:39.883 03:57:40 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:39.883 03:57:40 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:39.883 03:57:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:39.883 03:57:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:40.141 [2024-12-13 03:57:41.108944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:40.141 [2024-12-13 03:57:41.109324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:46:40.141 [2024-12-13 03:57:41.110307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:46:40.141 [2024-12-13 03:57:41.111304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:40.141 [2024-12-13 03:57:41.111324] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:40.141 [2024-12-13 03:57:41.111336] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:40.141 [2024-12-13 03:57:41.111348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:46:40.141 request: 00:46:40.141 { 00:46:40.141 "name": "nvme0", 00:46:40.141 "trtype": "tcp", 00:46:40.141 "traddr": "127.0.0.1", 00:46:40.141 "adrfam": "ipv4", 00:46:40.141 "trsvcid": "4420", 00:46:40.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:40.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:40.141 "prchk_reftag": false, 00:46:40.141 "prchk_guard": false, 00:46:40.141 "hdgst": false, 00:46:40.141 "ddgst": false, 00:46:40.141 "psk": "key1", 00:46:40.141 "allow_unrecognized_csi": false, 00:46:40.141 "method": "bdev_nvme_attach_controller", 00:46:40.141 "req_id": 1 00:46:40.141 } 00:46:40.141 Got JSON-RPC error response 00:46:40.141 response: 00:46:40.141 { 00:46:40.141 "code": -5, 00:46:40.141 "message": "Input/output error" 00:46:40.141 } 00:46:40.141 03:57:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:40.141 03:57:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:40.141 03:57:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:40.141 03:57:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:40.141 03:57:41 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.141 03:57:41 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:40.141 03:57:41 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.141 03:57:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:40.399 03:57:41 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:40.399 03:57:41 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:40.399 03:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:40.657 03:57:41 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:40.657 03:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:40.915 03:57:41 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:40.915 03:57:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:40.915 03:57:41 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:40.915 03:57:42 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:40.915 03:57:42 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5k4M6I1NtB 00:46:40.915 03:57:42 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:40.915 03:57:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:40.915 03:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:41.173 [2024-12-13 03:57:42.270882] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5k4M6I1NtB': 0100660 00:46:41.173 [2024-12-13 03:57:42.270915] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:41.173 request: 00:46:41.173 { 00:46:41.173 "name": "key0", 00:46:41.173 "path": "/tmp/tmp.5k4M6I1NtB", 00:46:41.173 "method": "keyring_file_add_key", 00:46:41.173 "req_id": 1 00:46:41.173 } 00:46:41.173 Got JSON-RPC error response 00:46:41.173 response: 00:46:41.173 { 00:46:41.173 "code": -1, 00:46:41.173 "message": "Operation not permitted" 00:46:41.173 } 00:46:41.173 03:57:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:41.173 03:57:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:41.173 03:57:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:41.173 03:57:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:41.173 03:57:42 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5k4M6I1NtB 00:46:41.173 03:57:42 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:41.173 03:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5k4M6I1NtB 00:46:41.430 03:57:42 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5k4M6I1NtB 00:46:41.430 03:57:42 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:41.430 03:57:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:41.430 03:57:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:41.430 03:57:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:41.430 03:57:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:41.430 03:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:41.688 03:57:42 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:41.689 03:57:42 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:41.689 03:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:41.689 [2024-12-13 03:57:42.840467] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5k4M6I1NtB': No such file or directory 00:46:41.689 [2024-12-13 03:57:42.840502] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:41.689 [2024-12-13 03:57:42.840523] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:41.689 [2024-12-13 03:57:42.840535] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:41.689 [2024-12-13 03:57:42.840547] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:41.689 [2024-12-13 03:57:42.840557] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:41.689 request: 00:46:41.689 { 00:46:41.689 "name": "nvme0", 00:46:41.689 "trtype": "tcp", 00:46:41.689 "traddr": "127.0.0.1", 00:46:41.689 "adrfam": "ipv4", 00:46:41.689 "trsvcid": "4420", 00:46:41.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:41.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:41.689 "prchk_reftag": false, 00:46:41.689 "prchk_guard": false, 00:46:41.689 "hdgst": false, 00:46:41.689 "ddgst": false, 00:46:41.689 "psk": "key0", 00:46:41.689 "allow_unrecognized_csi": false, 00:46:41.689 "method": "bdev_nvme_attach_controller", 00:46:41.689 "req_id": 1 00:46:41.689 } 00:46:41.689 Got JSON-RPC error response 00:46:41.689 response: 00:46:41.689 { 00:46:41.689 "code": -19, 00:46:41.689 "message": "No such device" 00:46:41.689 } 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:41.689 03:57:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:41.689 03:57:42 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:41.689 03:57:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:41.947 03:57:43 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jWXAPLSc8c 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:41.947 03:57:43 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:41.947 03:57:43 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:41.947 03:57:43 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:41.947 03:57:43 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:41.947 03:57:43 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:41.947 03:57:43 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jWXAPLSc8c 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jWXAPLSc8c 00:46:41.947 03:57:43 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.jWXAPLSc8c 00:46:41.947 03:57:43 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jWXAPLSc8c 00:46:41.947 03:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jWXAPLSc8c 00:46:42.205 03:57:43 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.205 03:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:42.463 nvme0n1 00:46:42.463 03:57:43 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:42.463 03:57:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:42.463 03:57:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:42.463 03:57:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:42.463 03:57:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:42.463 03:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:42.721 03:57:43 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:42.721 03:57:43 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:42.721 03:57:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:42.982 03:57:43 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:42.982 03:57:43 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:42.982 03:57:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:42.982 03:57:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:42.982 03:57:43 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:42.982 03:57:44 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:42.982 03:57:44 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:42.982 03:57:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:42.982 03:57:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:42.982 03:57:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:42.982 03:57:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:42.982 03:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.240 03:57:44 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:43.240 03:57:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:43.240 03:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:43.497 03:57:44 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:43.497 03:57:44 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:43.497 03:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.755 03:57:44 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:43.755 03:57:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jWXAPLSc8c 00:46:43.755 03:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jWXAPLSc8c 00:46:43.755 03:57:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wioA8dDuuS 00:46:43.755 03:57:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wioA8dDuuS 00:46:44.013 03:57:45 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:44.013 03:57:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:44.270 nvme0n1 00:46:44.270 03:57:45 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:44.270 03:57:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:44.529 03:57:45 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:44.529 "subsystems": [ 00:46:44.529 { 00:46:44.529 "subsystem": "keyring", 00:46:44.529 "config": [ 00:46:44.529 { 00:46:44.529 "method": "keyring_file_add_key", 00:46:44.529 "params": { 00:46:44.529 "name": "key0", 00:46:44.529 "path": "/tmp/tmp.jWXAPLSc8c" 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "keyring_file_add_key", 00:46:44.529 "params": { 00:46:44.529 "name": "key1", 00:46:44.529 "path": "/tmp/tmp.wioA8dDuuS" 00:46:44.529 } 00:46:44.529 } 00:46:44.529 ] 00:46:44.529 
}, 00:46:44.529 { 00:46:44.529 "subsystem": "iobuf", 00:46:44.529 "config": [ 00:46:44.529 { 00:46:44.529 "method": "iobuf_set_options", 00:46:44.529 "params": { 00:46:44.529 "small_pool_count": 8192, 00:46:44.529 "large_pool_count": 1024, 00:46:44.529 "small_bufsize": 8192, 00:46:44.529 "large_bufsize": 135168, 00:46:44.529 "enable_numa": false 00:46:44.529 } 00:46:44.529 } 00:46:44.529 ] 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "subsystem": "sock", 00:46:44.529 "config": [ 00:46:44.529 { 00:46:44.529 "method": "sock_set_default_impl", 00:46:44.529 "params": { 00:46:44.529 "impl_name": "posix" 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "sock_impl_set_options", 00:46:44.529 "params": { 00:46:44.529 "impl_name": "ssl", 00:46:44.529 "recv_buf_size": 4096, 00:46:44.529 "send_buf_size": 4096, 00:46:44.529 "enable_recv_pipe": true, 00:46:44.529 "enable_quickack": false, 00:46:44.529 "enable_placement_id": 0, 00:46:44.529 "enable_zerocopy_send_server": true, 00:46:44.529 "enable_zerocopy_send_client": false, 00:46:44.529 "zerocopy_threshold": 0, 00:46:44.529 "tls_version": 0, 00:46:44.529 "enable_ktls": false 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "sock_impl_set_options", 00:46:44.529 "params": { 00:46:44.529 "impl_name": "posix", 00:46:44.529 "recv_buf_size": 2097152, 00:46:44.529 "send_buf_size": 2097152, 00:46:44.529 "enable_recv_pipe": true, 00:46:44.529 "enable_quickack": false, 00:46:44.529 "enable_placement_id": 0, 00:46:44.529 "enable_zerocopy_send_server": true, 00:46:44.529 "enable_zerocopy_send_client": false, 00:46:44.529 "zerocopy_threshold": 0, 00:46:44.529 "tls_version": 0, 00:46:44.529 "enable_ktls": false 00:46:44.529 } 00:46:44.529 } 00:46:44.529 ] 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "subsystem": "vmd", 00:46:44.529 "config": [] 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "subsystem": "accel", 00:46:44.529 "config": [ 00:46:44.529 { 00:46:44.529 "method": "accel_set_options", 00:46:44.529 "params": { 00:46:44.529 "small_cache_size": 128, 00:46:44.529 "large_cache_size": 16, 00:46:44.529 "task_count": 2048, 00:46:44.529 "sequence_count": 2048, 00:46:44.529 "buf_count": 2048 00:46:44.529 } 00:46:44.529 } 00:46:44.529 ] 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "subsystem": "bdev", 00:46:44.529 "config": [ 00:46:44.529 { 00:46:44.529 "method": "bdev_set_options", 00:46:44.529 "params": { 00:46:44.529 "bdev_io_pool_size": 65535, 00:46:44.529 "bdev_io_cache_size": 256, 00:46:44.529 "bdev_auto_examine": true, 00:46:44.529 "iobuf_small_cache_size": 128, 00:46:44.529 "iobuf_large_cache_size": 16 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "bdev_raid_set_options", 00:46:44.529 "params": { 00:46:44.529 "process_window_size_kb": 1024, 00:46:44.529 "process_max_bandwidth_mb_sec": 0 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "bdev_iscsi_set_options", 00:46:44.529 "params": { 00:46:44.529 "timeout_sec": 30 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "bdev_nvme_set_options", 00:46:44.529 "params": { 00:46:44.529 "action_on_timeout": "none", 00:46:44.529 "timeout_us": 0, 00:46:44.529 "timeout_admin_us": 0, 00:46:44.529 "keep_alive_timeout_ms": 10000, 00:46:44.529 "arbitration_burst": 0, 00:46:44.529 "low_priority_weight": 0, 00:46:44.529 "medium_priority_weight": 0, 00:46:44.529 "high_priority_weight": 0, 00:46:44.529 "nvme_adminq_poll_period_us": 10000, 00:46:44.529 "nvme_ioq_poll_period_us": 0, 00:46:44.529 "io_queue_requests": 512, 00:46:44.529 
"delay_cmd_submit": true, 00:46:44.529 "transport_retry_count": 4, 00:46:44.529 "bdev_retry_count": 3, 00:46:44.529 "transport_ack_timeout": 0, 00:46:44.529 "ctrlr_loss_timeout_sec": 0, 00:46:44.529 "reconnect_delay_sec": 0, 00:46:44.529 "fast_io_fail_timeout_sec": 0, 00:46:44.529 "disable_auto_failback": false, 00:46:44.529 "generate_uuids": false, 00:46:44.529 "transport_tos": 0, 00:46:44.529 "nvme_error_stat": false, 00:46:44.529 "rdma_srq_size": 0, 00:46:44.529 "io_path_stat": false, 00:46:44.529 "allow_accel_sequence": false, 00:46:44.529 "rdma_max_cq_size": 0, 00:46:44.529 "rdma_cm_event_timeout_ms": 0, 00:46:44.529 "dhchap_digests": [ 00:46:44.529 "sha256", 00:46:44.529 "sha384", 00:46:44.529 "sha512" 00:46:44.529 ], 00:46:44.529 "dhchap_dhgroups": [ 00:46:44.529 "null", 00:46:44.529 "ffdhe2048", 00:46:44.529 "ffdhe3072", 00:46:44.529 "ffdhe4096", 00:46:44.529 "ffdhe6144", 00:46:44.529 "ffdhe8192" 00:46:44.529 ], 00:46:44.529 "rdma_umr_per_io": false 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "bdev_nvme_attach_controller", 00:46:44.529 "params": { 00:46:44.529 "name": "nvme0", 00:46:44.529 "trtype": "TCP", 00:46:44.529 "adrfam": "IPv4", 00:46:44.529 "traddr": "127.0.0.1", 00:46:44.529 "trsvcid": "4420", 00:46:44.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:44.529 "prchk_reftag": false, 00:46:44.529 "prchk_guard": false, 00:46:44.529 "ctrlr_loss_timeout_sec": 0, 00:46:44.529 "reconnect_delay_sec": 0, 00:46:44.529 "fast_io_fail_timeout_sec": 0, 00:46:44.529 "psk": "key0", 00:46:44.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:44.529 "hdgst": false, 00:46:44.529 "ddgst": false, 00:46:44.529 "multipath": "multipath" 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "bdev_nvme_set_hotplug", 00:46:44.529 "params": { 00:46:44.529 "period_us": 100000, 00:46:44.529 "enable": false 00:46:44.529 } 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "method": "bdev_wait_for_examine" 00:46:44.529 } 00:46:44.529 ] 00:46:44.529 }, 00:46:44.529 { 00:46:44.529 "subsystem": "nbd", 00:46:44.529 "config": [] 00:46:44.529 } 00:46:44.529 ] 00:46:44.529 }' 00:46:44.529 03:57:45 keyring_file -- keyring/file.sh@115 -- # killprocess 3043426 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3043426 ']' 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3043426 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043426 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043426' 00:46:44.529 killing process with pid 3043426 00:46:44.529 03:57:45 keyring_file -- common/autotest_common.sh@973 -- # kill 3043426 00:46:44.529 Received shutdown signal, test time was about 1.000000 seconds 00:46:44.529 00:46:44.529 Latency(us) 00:46:44.529 [2024-12-13T02:57:45.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:44.529 [2024-12-13T02:57:45.739Z] =================================================================================================================== 00:46:44.530 [2024-12-13T02:57:45.739Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:46:44.530 03:57:45 keyring_file -- common/autotest_common.sh@978 -- # wait 3043426 00:46:45.464 03:57:46 keyring_file -- keyring/file.sh@118 -- # bperfpid=3045078 00:46:45.464 03:57:46 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3045078 /var/tmp/bperf.sock 00:46:45.464 03:57:46 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3045078 ']' 00:46:45.464 03:57:46 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:45.464 03:57:46 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:45.464 03:57:46 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:45.464 03:57:46 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:45.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:45.464 03:57:46 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:45.464 "subsystems": [ 00:46:45.464 { 00:46:45.464 "subsystem": "keyring", 00:46:45.464 "config": [ 00:46:45.464 { 00:46:45.464 "method": "keyring_file_add_key", 00:46:45.464 "params": { 00:46:45.464 "name": "key0", 00:46:45.464 "path": "/tmp/tmp.jWXAPLSc8c" 00:46:45.464 } 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "method": "keyring_file_add_key", 00:46:45.464 "params": { 00:46:45.464 "name": "key1", 00:46:45.464 "path": "/tmp/tmp.wioA8dDuuS" 00:46:45.464 } 00:46:45.464 } 00:46:45.464 ] 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "subsystem": "iobuf", 00:46:45.464 "config": [ 00:46:45.464 { 00:46:45.464 "method": "iobuf_set_options", 00:46:45.464 "params": { 00:46:45.464 "small_pool_count": 8192, 00:46:45.464 "large_pool_count": 1024, 00:46:45.464 "small_bufsize": 8192, 00:46:45.464 "large_bufsize": 135168, 00:46:45.464 "enable_numa": false 00:46:45.464 } 00:46:45.464 } 00:46:45.464 ] 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "subsystem": "sock", 00:46:45.464 "config": [ 00:46:45.464 { 00:46:45.464 "method": "sock_set_default_impl", 00:46:45.464 "params": { 00:46:45.464 "impl_name": "posix" 00:46:45.464 } 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "method": "sock_impl_set_options", 00:46:45.464 "params": { 00:46:45.464 "impl_name": "ssl", 00:46:45.464 "recv_buf_size": 4096, 00:46:45.464 "send_buf_size": 4096, 00:46:45.464 "enable_recv_pipe": true, 00:46:45.464 "enable_quickack": false, 00:46:45.464 "enable_placement_id": 0, 00:46:45.464 "enable_zerocopy_send_server": true, 00:46:45.464 "enable_zerocopy_send_client": false, 00:46:45.464 "zerocopy_threshold": 0, 00:46:45.464 "tls_version": 0, 00:46:45.464 "enable_ktls": false 00:46:45.464 } 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "method": "sock_impl_set_options", 00:46:45.464 "params": { 00:46:45.464 "impl_name": "posix", 00:46:45.464 "recv_buf_size": 2097152, 00:46:45.464 "send_buf_size": 2097152, 00:46:45.464 "enable_recv_pipe": true, 00:46:45.464 "enable_quickack": false, 00:46:45.464 "enable_placement_id": 0, 00:46:45.464 "enable_zerocopy_send_server": true, 00:46:45.464 "enable_zerocopy_send_client": false, 00:46:45.464 "zerocopy_threshold": 0, 00:46:45.464 "tls_version": 0, 00:46:45.464 "enable_ktls": false 00:46:45.464 } 00:46:45.464 } 00:46:45.464 ] 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "subsystem": "vmd", 00:46:45.464 "config": [] 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "subsystem": "accel", 
00:46:45.464 "config": [ 00:46:45.464 { 00:46:45.464 "method": "accel_set_options", 00:46:45.464 "params": { 00:46:45.464 "small_cache_size": 128, 00:46:45.464 "large_cache_size": 16, 00:46:45.464 "task_count": 2048, 00:46:45.464 "sequence_count": 2048, 00:46:45.464 "buf_count": 2048 00:46:45.464 } 00:46:45.464 } 00:46:45.464 ] 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "subsystem": "bdev", 00:46:45.464 "config": [ 00:46:45.464 { 00:46:45.464 "method": "bdev_set_options", 00:46:45.464 "params": { 00:46:45.464 "bdev_io_pool_size": 65535, 00:46:45.464 "bdev_io_cache_size": 256, 00:46:45.464 "bdev_auto_examine": true, 00:46:45.464 "iobuf_small_cache_size": 128, 00:46:45.464 "iobuf_large_cache_size": 16 00:46:45.464 } 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "method": "bdev_raid_set_options", 00:46:45.464 "params": { 00:46:45.464 "process_window_size_kb": 1024, 00:46:45.464 "process_max_bandwidth_mb_sec": 0 00:46:45.464 } 00:46:45.464 }, 00:46:45.464 { 00:46:45.464 "method": "bdev_iscsi_set_options", 00:46:45.464 "params": { 00:46:45.464 "timeout_sec": 30 00:46:45.465 } 00:46:45.465 }, 00:46:45.465 { 00:46:45.465 "method": "bdev_nvme_set_options", 00:46:45.465 "params": { 00:46:45.465 "action_on_timeout": "none", 00:46:45.465 "timeout_us": 0, 00:46:45.465 "timeout_admin_us": 0, 00:46:45.465 "keep_alive_timeout_ms": 10000, 00:46:45.465 "arbitration_burst": 0, 00:46:45.465 "low_priority_weight": 0, 00:46:45.465 "medium_priority_weight": 0, 00:46:45.465 "high_priority_weight": 0, 00:46:45.465 "nvme_adminq_poll_period_us": 10000, 00:46:45.465 "nvme_ioq_poll_period_us": 0, 00:46:45.465 "io_queue_requests": 512, 00:46:45.465 "delay_cmd_submit": true, 00:46:45.465 "transport_retry_count": 4, 00:46:45.465 "bdev_retry_count": 3, 00:46:45.465 "transport_ack_timeout": 0, 00:46:45.465 "ctrlr_loss_timeout_sec": 0, 00:46:45.465 "reconnect_delay_sec": 0, 00:46:45.465 "fast_io_fail_timeout_sec": 0, 00:46:45.465 "disable_auto_failback": false, 00:46:45.465 "generate_uuids": false, 00:46:45.465 "transport_tos": 0, 00:46:45.465 "nvme_error_stat": false, 00:46:45.465 "rdma_srq_size": 0, 00:46:45.465 "io_path_stat": false, 00:46:45.465 "allow_accel_sequence": false, 00:46:45.465 "rdma_max_cq_size": 0, 00:46:45.465 "rdma_cm_event_timeout_ms": 0, 00:46:45.465 "dhchap_digests": [ 00:46:45.465 "sha256", 00:46:45.465 "sha384", 00:46:45.465 "sha512" 00:46:45.465 ], 00:46:45.465 "dhchap_dhgroups": [ 00:46:45.465 "null", 00:46:45.465 "ffdhe2048", 00:46:45.465 "ffdhe3072", 00:46:45.465 "ffdhe4096", 00:46:45.465 "ffdhe6144", 00:46:45.465 "ffdhe8192" 00:46:45.465 ], 00:46:45.465 "rdma_umr_per_io": false 00:46:45.465 } 00:46:45.465 }, 00:46:45.465 { 00:46:45.465 "method": "bdev_nvme_attach_controller", 00:46:45.465 "params": { 00:46:45.465 "name": "nvme0", 00:46:45.465 "trtype": "TCP", 00:46:45.465 "adrfam": "IPv4", 00:46:45.465 "traddr": "127.0.0.1", 00:46:45.465 "trsvcid": "4420", 00:46:45.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:45.465 "prchk_reftag": false, 00:46:45.465 "prchk_guard": false, 00:46:45.465 "ctrlr_loss_timeout_sec": 0, 00:46:45.465 "reconnect_delay_sec": 0, 00:46:45.465 "fast_io_fail_timeout_sec": 0, 00:46:45.465 "psk": "key0", 00:46:45.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:45.465 "hdgst": false, 00:46:45.465 "ddgst": false, 00:46:45.465 "multipath": "multipath" 00:46:45.465 } 00:46:45.465 }, 00:46:45.465 { 00:46:45.465 "method": "bdev_nvme_set_hotplug", 00:46:45.465 "params": { 00:46:45.465 "period_us": 100000, 00:46:45.465 "enable": false 00:46:45.465 } 00:46:45.465 }, 00:46:45.465 
{ 00:46:45.465 "method": "bdev_wait_for_examine" 00:46:45.465 } 00:46:45.465 ] 00:46:45.465 }, 00:46:45.465 { 00:46:45.465 "subsystem": "nbd", 00:46:45.465 "config": [] 00:46:45.465 } 00:46:45.465 ] 00:46:45.465 }' 00:46:45.465 03:57:46 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:45.465 03:57:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:45.465 [2024-12-13 03:57:46.606879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:45.465 [2024-12-13 03:57:46.606974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045078 ] 00:46:45.723 [2024-12-13 03:57:46.721328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:45.723 [2024-12-13 03:57:46.831034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:46.289 [2024-12-13 03:57:47.255771] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:46.289 03:57:47 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:46.289 03:57:47 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:46.289 03:57:47 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:46.289 03:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.289 03:57:47 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:46.547 03:57:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:46.547 03:57:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:46.547 03:57:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:46.547 03:57:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.547 03:57:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.547 03:57:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:46.547 03:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.805 03:57:47 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:46.805 03:57:47 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:46.805 03:57:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:46.805 03:57:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.805 03:57:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.805 03:57:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:46.805 03:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.805 03:57:47 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:46.805 03:57:47 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:46.805 03:57:47 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:46.805 03:57:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:47.063 03:57:48 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:47.063 03:57:48 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:46:47.063 03:57:48 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jWXAPLSc8c /tmp/tmp.wioA8dDuuS 00:46:47.063 03:57:48 keyring_file -- keyring/file.sh@20 -- # killprocess 3045078 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3045078 ']' 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3045078 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045078 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045078' 00:46:47.063 killing process with pid 3045078 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@973 -- # kill 3045078 00:46:47.063 Received shutdown signal, test time was about 1.000000 seconds 00:46:47.063 00:46:47.063 Latency(us) 00:46:47.063 [2024-12-13T02:57:48.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:47.063 [2024-12-13T02:57:48.272Z] =================================================================================================================== 00:46:47.063 [2024-12-13T02:57:48.272Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:47.063 03:57:48 keyring_file -- common/autotest_common.sh@978 -- # wait 3045078 00:46:47.997 03:57:49 keyring_file -- keyring/file.sh@21 -- # killprocess 3043198 00:46:47.997 03:57:49 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3043198 ']' 00:46:47.997 03:57:49 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3043198 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043198 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043198' 00:46:47.998 killing process with pid 3043198 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@973 -- # kill 3043198 00:46:47.998 03:57:49 keyring_file -- common/autotest_common.sh@978 -- # wait 3043198 00:46:50.527 00:46:50.527 real 0m16.524s 00:46:50.527 user 0m35.805s 00:46:50.527 sys 0m2.987s 00:46:50.527 03:57:51 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:50.527 03:57:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:50.527 ************************************ 00:46:50.527 END TEST keyring_file 00:46:50.527 ************************************ 00:46:50.527 03:57:51 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:46:50.527 03:57:51 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:50.527 03:57:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:50.527 03:57:51 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:46:50.527 03:57:51 -- common/autotest_common.sh@10 -- # set +x 00:46:50.527 ************************************ 00:46:50.527 START TEST keyring_linux 00:46:50.527 ************************************ 00:46:50.527 03:57:51 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:50.527 Joined session keyring: 407790589 00:46:50.527 * Looking for test storage... 00:46:50.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:50.527 03:57:51 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:50.527 03:57:51 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:46:50.528 03:57:51 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:50.786 03:57:51 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:50.787 03:57:51 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:50.787 03:57:51 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:50.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.787 --rc genhtml_branch_coverage=1 00:46:50.787 --rc genhtml_function_coverage=1 00:46:50.787 --rc genhtml_legend=1 00:46:50.787 --rc geninfo_all_blocks=1 00:46:50.787 --rc geninfo_unexecuted_blocks=1 00:46:50.787 00:46:50.787 ' 00:46:50.787 03:57:51 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:50.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.787 --rc genhtml_branch_coverage=1 00:46:50.787 --rc genhtml_function_coverage=1 00:46:50.787 --rc genhtml_legend=1 00:46:50.787 --rc geninfo_all_blocks=1 00:46:50.787 --rc geninfo_unexecuted_blocks=1 00:46:50.787 00:46:50.787 ' 00:46:50.787 03:57:51 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:50.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.787 --rc genhtml_branch_coverage=1 00:46:50.787 --rc genhtml_function_coverage=1 00:46:50.787 --rc genhtml_legend=1 00:46:50.787 --rc geninfo_all_blocks=1 00:46:50.787 --rc geninfo_unexecuted_blocks=1 00:46:50.787 00:46:50.787 ' 00:46:50.787 03:57:51 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:50.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:50.787 --rc genhtml_branch_coverage=1 00:46:50.787 --rc genhtml_function_coverage=1 00:46:50.787 --rc genhtml_legend=1 00:46:50.787 --rc geninfo_all_blocks=1 00:46:50.787 --rc geninfo_unexecuted_blocks=1 00:46:50.787 00:46:50.787 ' 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:50.787 03:57:51 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:50.787 03:57:51 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.787 03:57:51 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.787 03:57:51 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:50.787 03:57:51 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:50.787 03:57:51 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:50.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@733 -- # python - 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:50.787 /tmp/:spdk-test:key0 00:46:50.787 03:57:51 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:50.787 03:57:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:50.787 
03:57:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:46:50.787 03:57:51 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:50.788 03:57:51 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:50.788 03:57:51 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:46:50.788 03:57:51 keyring_linux -- nvmf/common.sh@733 -- # python - 00:46:50.788 03:57:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:50.788 03:57:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:50.788 /tmp/:spdk-test:key1 00:46:50.788 03:57:51 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3045910 00:46:50.788 03:57:51 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3045910 00:46:50.788 03:57:51 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3045910 ']' 00:46:50.788 03:57:51 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:50.788 03:57:51 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:50.788 03:57:51 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:50.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:50.788 03:57:51 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:50.788 03:57:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:50.788 03:57:51 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:50.788 [2024-12-13 03:57:51.968221] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:46:50.788 [2024-12-13 03:57:51.968315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045910 ] 00:46:51.046 [2024-12-13 03:57:52.081386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.046 [2024-12-13 03:57:52.186860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:46:51.981 03:57:52 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:51.981 03:57:52 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:46:51.981 03:57:52 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:51.981 03:57:52 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:51.981 03:57:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:51.981 [2024-12-13 03:57:53.001222] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:51.981 null0 00:46:51.981 [2024-12-13 03:57:53.033261] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:51.981 [2024-12-13 03:57:53.033613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:51.981 03:57:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:51.981 952557218 00:46:51.981 03:57:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:51.981 256439328 00:46:51.981 03:57:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3046134 00:46:51.981 03:57:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3046134 /var/tmp/bperf.sock 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3046134 ']' 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:51.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:51.981 03:57:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:51.981 03:57:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:51.981 [2024-12-13 03:57:53.131039] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
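The two keyctl add calls traced above are the heart of this test: they stage the TLS PSKs in the kernel session keyring under the names :spdk-test:key0 and :spdk-test:key1, and everything that follows refers to the keys only by name (--psk :spdk-test:key0) or by the serial numbers keyctl printed (952557218 and 256439328 in this run; the values differ between runs). A minimal reproduction of that kernel-keyring round trip, using only the keyctl invocations that appear in this trace, would be:

    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s   # prints the serial, e.g. 952557218
    sn=$(keyctl search @s user :spdk-test:key0)   # find the key again by name in the session keyring
    keyctl print "$sn"                            # dump the payload for verification
    keyctl unlink "$sn"                           # drop it when the test is done

The payload itself stays in the kernel keyring; bdevperf is started with --wait-for-rpc so that keyring_linux_set_options can be enabled over /var/tmp/bperf.sock before any controller is attached.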
00:46:51.981 [2024-12-13 03:57:53.131141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046134 ] 00:46:52.239 [2024-12-13 03:57:53.241405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:52.239 [2024-12-13 03:57:53.354444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:52.805 03:57:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:52.805 03:57:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:46:52.805 03:57:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:52.805 03:57:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:46:53.063 03:57:54 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:46:53.063 03:57:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:53.630 03:57:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:53.630 03:57:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:53.630 [2024-12-13 03:57:54.764943] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:53.921 nvme0n1 00:46:53.921 03:57:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:46:53.921 03:57:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:46:53.921 03:57:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:53.921 03:57:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:53.921 03:57:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:53.921 03:57:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.921 03:57:55 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:46:53.921 03:57:55 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:53.921 03:57:55 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:46:53.921 03:57:55 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:46:53.921 03:57:55 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:53.921 03:57:55 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:46:53.921 03:57:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.250 03:57:55 keyring_linux -- keyring/linux.sh@25 -- # sn=952557218 00:46:54.250 03:57:55 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:46:54.250 03:57:55 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:54.250 03:57:55 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 952557218 == \9\5\2\5\5\7\2\1\8 ]] 00:46:54.250 03:57:55 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 952557218 00:46:54.250 03:57:55 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:46:54.250 03:57:55 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:54.250 Running I/O for 1 seconds... 00:46:55.213 15944.00 IOPS, 62.28 MiB/s 00:46:55.213 Latency(us) 00:46:55.213 [2024-12-13T02:57:56.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:55.213 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:55.213 nvme0n1 : 1.01 15943.51 62.28 0.00 0.00 7994.73 6834.47 14480.34 00:46:55.213 [2024-12-13T02:57:56.422Z] =================================================================================================================== 00:46:55.213 [2024-12-13T02:57:56.422Z] Total : 15943.51 62.28 0.00 0.00 7994.73 6834.47 14480.34 00:46:55.213 { 00:46:55.213 "results": [ 00:46:55.213 { 00:46:55.213 "job": "nvme0n1", 00:46:55.213 "core_mask": "0x2", 00:46:55.213 "workload": "randread", 00:46:55.213 "status": "finished", 00:46:55.213 "queue_depth": 128, 00:46:55.213 "io_size": 4096, 00:46:55.213 "runtime": 1.008059, 00:46:55.213 "iops": 15943.51124289352, 00:46:55.213 "mibps": 62.279340792552816, 00:46:55.213 "io_failed": 0, 00:46:55.213 "io_timeout": 0, 00:46:55.213 "avg_latency_us": 7994.732973997962, 00:46:55.213 "min_latency_us": 6834.4685714285715, 00:46:55.213 "max_latency_us": 14480.335238095238 00:46:55.213 } 00:46:55.213 ], 00:46:55.213 "core_count": 1 00:46:55.213 } 00:46:55.213 03:57:56 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:55.213 03:57:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:55.472 03:57:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:55.472 03:57:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:55.472 03:57:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:55.472 03:57:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:55.472 03:57:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:55.472 03:57:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.731 03:57:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:55.731 03:57:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:55.731 03:57:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:55.731 03:57:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:55.731 03:57:56 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:55.731 03:57:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:55.990 [2024-12-13 03:57:56.956737] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:55.990 [2024-12-13 03:57:56.957255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:46:55.990 [2024-12-13 03:57:56.958237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:46:55.990 [2024-12-13 03:57:56.959235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:55.990 [2024-12-13 03:57:56.959260] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:55.990 [2024-12-13 03:57:56.959275] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:55.990 [2024-12-13 03:57:56.959287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:46:55.990 request: 00:46:55.990 { 00:46:55.990 "name": "nvme0", 00:46:55.990 "trtype": "tcp", 00:46:55.990 "traddr": "127.0.0.1", 00:46:55.990 "adrfam": "ipv4", 00:46:55.990 "trsvcid": "4420", 00:46:55.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:55.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:55.990 "prchk_reftag": false, 00:46:55.990 "prchk_guard": false, 00:46:55.990 "hdgst": false, 00:46:55.990 "ddgst": false, 00:46:55.990 "psk": ":spdk-test:key1", 00:46:55.990 "allow_unrecognized_csi": false, 00:46:55.990 "method": "bdev_nvme_attach_controller", 00:46:55.990 "req_id": 1 00:46:55.990 } 00:46:55.990 Got JSON-RPC error response 00:46:55.990 response: 00:46:55.990 { 00:46:55.990 "code": -5, 00:46:55.990 "message": "Input/output error" 00:46:55.990 } 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@33 -- # sn=952557218 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 952557218 00:46:55.990 1 links removed 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@33 -- # sn=256439328 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 256439328 00:46:55.990 1 links removed 00:46:55.990 03:57:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3046134 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3046134 ']' 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3046134 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:55.990 03:57:56 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046134 00:46:55.990 03:57:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:55.990 03:57:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:55.990 03:57:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046134' 00:46:55.990 killing process with pid 3046134 00:46:55.990 03:57:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 3046134 00:46:55.990 Received shutdown signal, test time was about 1.000000 seconds 00:46:55.990 00:46:55.990 
Latency(us)
00:46:55.990 [2024-12-13T02:57:57.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:55.990 [2024-12-13T02:57:57.199Z] ===================================================================================================================
00:46:55.990 [2024-12-13T02:57:57.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:46:55.990 03:57:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 3046134
00:46:56.926 03:57:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3045910
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3045910 ']'
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3045910
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045910
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045910' killing process with pid 3045910
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 3045910
00:46:56.926 03:57:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 3045910
00:46:59.460
00:46:59.460 real 0m8.688s
00:46:59.460 user 0m14.199s
00:46:59.460 sys 0m1.633s
00:46:59.460 03:58:00 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:46:59.460 03:58:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:46:59.460 ************************************
00:46:59.460 END TEST keyring_linux
00:46:59.460 ************************************
00:46:59.460 03:58:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:46:59.460 03:58:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:46:59.460 03:58:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:46:59.460 03:58:00 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:46:59.460 03:58:00 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:46:59.460 03:58:00 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:46:59.460 03:58:00 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:46:59.460 03:58:00 -- common/autotest_common.sh@726 -- # xtrace_disable
00:46:59.460 03:58:00 -- common/autotest_common.sh@10 -- # set +x
00:46:59.460 03:58:00 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:46:59.460 03:58:00 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:46:59.460 03:58:00 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:46:59.460 03:58:00 -- common/autotest_common.sh@10 -- # set +x
00:47:04.733 INFO: APP EXITING
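The keyring_linux teardown traced above reduces to three steps: resolve each ':spdk-test:keyN' description to its kernel key serial with keyctl search against the session keyring (@s), unlink that serial, and kill the bperf process. A minimal standalone sketch of that flow follows; it is not the SPDK test script itself. The keyctl invocations and key descriptions mirror the trace, while the helper function and the BPERF_PID variable are illustrative stand-ins.

#!/usr/bin/env bash
# Illustrative sketch of the cleanup traced above (not the SPDK test script itself).
# Removes the PSKs that the test stored in the kernel session keyring, then stops
# a benchmark process whose PID is assumed to be held in BPERF_PID.
set -euo pipefail

unlink_test_key() {
    local desc=$1 sn
    # Resolve the key description to its serial number in the session keyring (@s),
    # as the trace does with "keyctl search @s user :spdk-test:keyN".
    sn=$(keyctl search @s user "$desc")
    # Drop the link; keyctl prints "1 links removed" on success, as seen in the log.
    keyctl unlink "$sn"
}

unlink_test_key ":spdk-test:key0"
unlink_test_key ":spdk-test:key1"

# Stop the benchmark process (3046134 in the log above); ignore it if already gone.
kill "${BPERF_PID:?set to the bperf PID}" 2>/dev/null || true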
00:47:04.733 INFO: killing all VMs 00:47:04.733 INFO: killing vhost app 00:47:04.733 INFO: EXIT DONE 00:47:06.112 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:47:06.112 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:47:06.112 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:47:06.370 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:47:08.904 Cleaning 00:47:08.904 Removing: /var/run/dpdk/spdk0/config 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:08.904 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:08.904 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:08.904 Removing: /var/run/dpdk/spdk1/config 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:08.904 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:08.904 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:08.904 Removing: /var/run/dpdk/spdk2/config 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:08.904 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:08.904 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:08.904 Removing: /var/run/dpdk/spdk3/config 00:47:08.904 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:08.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:08.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:08.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:08.904 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:08.905 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:08.905 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:08.905 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:08.905 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:08.905 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:08.905 Removing: /var/run/dpdk/spdk4/config 00:47:08.905 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:08.905 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:08.905 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:09.164 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:09.164 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:09.164 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:09.164 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:09.164 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:09.164 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:09.164 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:09.164 Removing: /dev/shm/bdev_svc_trace.1 00:47:09.164 Removing: /dev/shm/nvmf_trace.0 00:47:09.165 Removing: /dev/shm/spdk_tgt_trace.pid2455198 00:47:09.165 Removing: /var/run/dpdk/spdk0 00:47:09.165 Removing: /var/run/dpdk/spdk1 00:47:09.165 Removing: /var/run/dpdk/spdk2 00:47:09.165 Removing: /var/run/dpdk/spdk3 00:47:09.165 Removing: /var/run/dpdk/spdk4 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2451348 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2452813 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2455198 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2456196 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2457838 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2458627 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2459812 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2460042 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2460833 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2462533 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2464002 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2464815 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2465487 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2466288 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2466960 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2467298 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2467639 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2467953 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2468902 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2472288 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2472990 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2473691 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2473920 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2475533 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2475764 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2477527 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2477615 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2478295 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2478521 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2479008 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2479234 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2480685 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2480939 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2481432 00:47:09.165 Removing: 
/var/run/dpdk/spdk_pid2485540 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2489987 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2500961 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2501556 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2505958 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2506313 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2510958 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2517134 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2519896 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2531009 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2540229 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2542745 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2543867 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2561536 00:47:09.165 Removing: /var/run/dpdk/spdk_pid2565984 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2650032 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2655478 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2661419 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2671867 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2700662 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2705802 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2707378 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2709378 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2709826 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2710165 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2710523 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2711462 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2713469 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2715084 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2715824 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2718312 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2719240 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2720255 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2724592 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2730313 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2730314 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2730315 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2734380 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2738388 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2743797 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2780606 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2784880 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2790994 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2793125 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2795311 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2797268 00:47:09.424 Removing: /var/run/dpdk/spdk_pid2802323 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2807276 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2811673 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2819373 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2819551 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2824729 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2825084 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2825272 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2825749 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2825859 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2827222 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2828989 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2830549 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2832140 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2833872 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2835435 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2841512 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2842148 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2843847 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2845005 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2851073 00:47:09.425 Removing: 
/var/run/dpdk/spdk_pid2853797 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2859653 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2865621 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2874523 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2881845 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2881849 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2900479 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2901175 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2902064 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2902754 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2904207 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2905224 00:47:09.425 Removing: /var/run/dpdk/spdk_pid2906020 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2906908 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2911321 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2911769 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2917940 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2918216 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2923784 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2928166 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2937744 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2938411 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2942622 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2943046 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2947493 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2953911 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2956580 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2967181 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2976214 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2978036 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2979040 00:47:09.684 Removing: /var/run/dpdk/spdk_pid2996389 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3000571 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3003436 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3011221 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3011292 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3016384 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3018521 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3020645 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3022057 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3024236 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3025485 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3034284 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3034870 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3035516 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3038450 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3038999 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3039452 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3043198 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3043426 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3045078 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3045910 00:47:09.684 Removing: /var/run/dpdk/spdk_pid3046134 00:47:09.684 Clean 00:47:09.684 03:58:10 -- common/autotest_common.sh@1453 -- # return 0 00:47:09.684 03:58:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:09.684 03:58:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:09.684 03:58:10 -- common/autotest_common.sh@10 -- # set +x 00:47:09.684 03:58:10 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:47:09.684 03:58:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:09.684 03:58:10 -- common/autotest_common.sh@10 -- # set +x 00:47:09.944 03:58:10 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:09.944 03:58:10 -- spdk/autotest.sh@394 -- # [[ -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:09.944 03:58:10 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:09.944 03:58:10 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:09.944 03:58:10 -- spdk/autotest.sh@398 -- # hostname 00:47:09.944 03:58:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:09.944 geninfo: WARNING: invalid characters removed from testname! 00:47:31.884 03:58:30 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:32.143 03:58:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:34.047 03:58:35 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:35.949 03:58:36 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:37.851 03:58:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:39.228 03:58:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:41.133 03:58:42 -- 
spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:41.133 03:58:42 -- spdk/autorun.sh@1 -- $ timing_finish 00:47:41.133 03:58:42 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:47:41.133 03:58:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:41.133 03:58:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:41.133 03:58:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:41.133 + [[ -n 2373080 ]] 00:47:41.133 + sudo kill 2373080 00:47:41.143 [Pipeline] } 00:47:41.158 [Pipeline] // stage 00:47:41.164 [Pipeline] } 00:47:41.179 [Pipeline] // timeout 00:47:41.184 [Pipeline] } 00:47:41.198 [Pipeline] // catchError 00:47:41.203 [Pipeline] } 00:47:41.218 [Pipeline] // wrap 00:47:41.224 [Pipeline] } 00:47:41.237 [Pipeline] // catchError 00:47:41.246 [Pipeline] stage 00:47:41.248 [Pipeline] { (Epilogue) 00:47:41.260 [Pipeline] catchError 00:47:41.262 [Pipeline] { 00:47:41.274 [Pipeline] echo 00:47:41.276 Cleanup processes 00:47:41.281 [Pipeline] sh 00:47:41.567 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:41.567 3058385 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:41.580 [Pipeline] sh 00:47:41.865 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:41.865 ++ grep -v 'sudo pgrep' 00:47:41.865 ++ awk '{print $1}' 00:47:41.865 + sudo kill -9 00:47:41.865 + true 00:47:41.877 [Pipeline] sh 00:47:42.162 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:47:54.394 [Pipeline] sh 00:47:54.677 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:54.677 Artifacts sizes are good 00:47:54.692 [Pipeline] archiveArtifacts 00:47:54.699 Archiving artifacts 00:47:54.871 [Pipeline] sh 00:47:55.190 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:47:55.204 [Pipeline] cleanWs 00:47:55.214 [WS-CLEANUP] Deleting project workspace... 00:47:55.214 [WS-CLEANUP] Deferred wipeout is used... 00:47:55.221 [WS-CLEANUP] done 00:47:55.223 [Pipeline] } 00:47:55.240 [Pipeline] // catchError 00:47:55.252 [Pipeline] sh 00:47:55.536 + logger -p user.info -t JENKINS-CI 00:47:55.545 [Pipeline] } 00:47:55.557 [Pipeline] // stage 00:47:55.563 [Pipeline] } 00:47:55.576 [Pipeline] // node 00:47:55.582 [Pipeline] End of Pipeline 00:47:55.625 Finished: SUCCESS
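The coverage post-processing that appears near the end of this log follows a standard lcov pattern: capture test-time counters from the build tree, merge them with the pre-test baseline, then strip third-party and helper paths from the combined tracefile. A condensed sketch of that flow is below; the removal patterns are taken from the commands in the log, while the file locations are simplified and the --rc and --ignore-errors options shown in the log are omitted for brevity.

#!/usr/bin/env bash
# Condensed sketch of the lcov post-processing shown above (not the exact
# autotest.sh invocations).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT_DIR=$SPDK_DIR/../output

# 1. Capture the counters produced by the test run, tagged with the host name.
lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

# 2. Merge the pre-test baseline with the test-time capture.
lcov -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"

# 3. Drop paths that should not count toward SPDK coverage (patterns from the log).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
done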